00:00:00.001 Started by upstream project "autotest-per-patch" build number 121258 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 21680 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.019 The recommended git tool is: git 00:00:00.019 using credential 00000000-0000-0000-0000-000000000002 00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.037 Fetching changes from the remote Git repository 00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.057 Using shallow fetch with depth 1 00:00:00.057 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.057 > git --version # timeout=10 00:00:00.068 > git --version # 'git version 2.39.2' 00:00:00.068 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.069 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.069 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/56/22956/3 # timeout=5 00:00:04.926 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.937 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.948 Checking out Revision 352f638cc5f3ff89bb1b1ec8306986452d7550bf (FETCH_HEAD) 00:00:04.948 > git config core.sparsecheckout # timeout=10 00:00:04.959 > git read-tree -mu HEAD # timeout=10 00:00:04.976 > git checkout -f 352f638cc5f3ff89bb1b1ec8306986452d7550bf # timeout=5 00:00:04.996 Commit message: "jenkins/jjb-config: Add ubuntu2404 to per-patch and nightly testing" 00:00:04.996 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:05.080 [Pipeline] Start of Pipeline 00:00:05.090 [Pipeline] library 00:00:05.091 Loading library shm_lib@master 00:00:05.092 Library shm_lib@master is cached. Copying from home. 00:00:05.106 [Pipeline] node 00:00:05.115 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.117 [Pipeline] { 00:00:05.125 [Pipeline] catchError 00:00:05.126 [Pipeline] { 00:00:05.137 [Pipeline] wrap 00:00:05.142 [Pipeline] { 00:00:05.148 [Pipeline] stage 00:00:05.149 [Pipeline] { (Prologue) 00:00:05.348 [Pipeline] sh 00:00:05.634 + logger -p user.info -t JENKINS-CI 00:00:05.657 [Pipeline] echo 00:00:05.658 Node: CYP12 00:00:05.668 [Pipeline] sh 00:00:05.973 [Pipeline] setCustomBuildProperty 00:00:05.987 [Pipeline] echo 00:00:05.989 Cleanup processes 00:00:05.994 [Pipeline] sh 00:00:06.281 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.281 735162 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.297 [Pipeline] sh 00:00:06.588 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.588 ++ grep -v 'sudo pgrep' 00:00:06.588 ++ awk '{print $1}' 00:00:06.588 + sudo kill -9 00:00:06.588 + true 00:00:06.645 [Pipeline] cleanWs 00:00:06.655 [WS-CLEANUP] Deleting project workspace... 00:00:06.655 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.663 [WS-CLEANUP] done 00:00:06.667 [Pipeline] setCustomBuildProperty 00:00:06.681 [Pipeline] sh 00:00:06.968 + sudo git config --global --replace-all safe.directory '*' 00:00:07.055 [Pipeline] nodesByLabel 00:00:07.057 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.065 [Pipeline] httpRequest 00:00:07.071 HttpMethod: GET 00:00:07.071 URL: http://10.211.164.96/packages/jbp_352f638cc5f3ff89bb1b1ec8306986452d7550bf.tar.gz 00:00:07.075 Sending request to url: http://10.211.164.96/packages/jbp_352f638cc5f3ff89bb1b1ec8306986452d7550bf.tar.gz 00:00:07.079 Response Code: HTTP/1.1 200 OK 00:00:07.079 Success: Status code 200 is in the accepted range: 200,404 00:00:07.080 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_352f638cc5f3ff89bb1b1ec8306986452d7550bf.tar.gz 00:00:07.882 [Pipeline] sh 00:00:08.169 + tar --no-same-owner -xf jbp_352f638cc5f3ff89bb1b1ec8306986452d7550bf.tar.gz 00:00:08.188 [Pipeline] httpRequest 00:00:08.193 HttpMethod: GET 00:00:08.194 URL: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:08.195 Sending request to url: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:08.213 Response Code: HTTP/1.1 200 OK 00:00:08.214 Success: Status code 200 is in the accepted range: 200,404 00:00:08.214 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:56.446 [Pipeline] sh 00:00:56.729 + tar --no-same-owner -xf spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:59.284 [Pipeline] sh 00:00:59.569 + git -C spdk log --oneline -n5 00:00:59.569 8571999d8 test/scheduler: Stop moving all processes between cgroups 00:00:59.569 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:00:59.569 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:00:59.569 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:00:59.569 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:00:59.581 [Pipeline] } 00:00:59.597 [Pipeline] // stage 00:00:59.605 [Pipeline] stage 00:00:59.607 [Pipeline] { (Prepare) 00:00:59.624 [Pipeline] writeFile 00:00:59.638 [Pipeline] sh 00:00:59.923 + logger -p user.info -t JENKINS-CI 00:00:59.938 [Pipeline] sh 00:01:00.223 + logger -p user.info -t JENKINS-CI 00:01:00.236 [Pipeline] sh 00:01:00.521 + cat autorun-spdk.conf 00:01:00.521 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.521 SPDK_TEST_NVMF=1 00:01:00.521 SPDK_TEST_NVME_CLI=1 00:01:00.521 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.521 SPDK_TEST_NVMF_NICS=e810 00:01:00.521 SPDK_TEST_VFIOUSER=1 00:01:00.521 SPDK_RUN_UBSAN=1 00:01:00.521 NET_TYPE=phy 00:01:00.530 RUN_NIGHTLY=0 00:01:00.534 [Pipeline] readFile 00:01:00.557 [Pipeline] withEnv 00:01:00.559 [Pipeline] { 00:01:00.573 [Pipeline] sh 00:01:00.858 + set -ex 00:01:00.858 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:00.858 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:00.858 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.858 ++ SPDK_TEST_NVMF=1 00:01:00.858 ++ SPDK_TEST_NVME_CLI=1 00:01:00.858 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.858 ++ SPDK_TEST_NVMF_NICS=e810 00:01:00.858 ++ SPDK_TEST_VFIOUSER=1 00:01:00.858 ++ SPDK_RUN_UBSAN=1 00:01:00.858 ++ NET_TYPE=phy 00:01:00.858 ++ RUN_NIGHTLY=0 00:01:00.858 + case $SPDK_TEST_NVMF_NICS in 00:01:00.858 + DRIVERS=ice 00:01:00.858 + [[ tcp == \r\d\m\a ]] 00:01:00.858 + [[ -n ice ]] 00:01:00.858 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:01:00.858 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:08.984 rmmod: ERROR: Module irdma is not currently loaded 00:01:08.984 rmmod: ERROR: Module i40iw is not currently loaded 00:01:08.984 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:08.984 + true 00:01:08.984 + for D in $DRIVERS 00:01:08.984 + sudo modprobe ice 00:01:08.984 + exit 0 00:01:08.994 [Pipeline] } 00:01:09.005 [Pipeline] // withEnv 00:01:09.009 [Pipeline] } 00:01:09.017 [Pipeline] // stage 00:01:09.024 [Pipeline] catchError 00:01:09.025 [Pipeline] { 00:01:09.036 [Pipeline] timeout 00:01:09.036 Timeout set to expire in 40 min 00:01:09.037 [Pipeline] { 00:01:09.110 [Pipeline] stage 00:01:09.112 [Pipeline] { (Tests) 00:01:09.124 [Pipeline] sh 00:01:09.404 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.404 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.404 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.404 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:09.404 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:09.404 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.404 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:09.404 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.404 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.404 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.404 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.404 + source /etc/os-release 00:01:09.404 ++ NAME='Fedora Linux' 00:01:09.404 ++ VERSION='38 (Cloud Edition)' 00:01:09.404 ++ ID=fedora 00:01:09.404 ++ VERSION_ID=38 00:01:09.404 ++ VERSION_CODENAME= 00:01:09.404 ++ PLATFORM_ID=platform:f38 00:01:09.404 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:09.404 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:09.404 ++ LOGO=fedora-logo-icon 00:01:09.404 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:09.404 ++ HOME_URL=https://fedoraproject.org/ 00:01:09.404 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:09.404 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:09.404 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:09.404 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:09.404 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:09.404 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:09.404 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:09.404 ++ SUPPORT_END=2024-05-14 00:01:09.404 ++ VARIANT='Cloud Edition' 00:01:09.404 ++ VARIANT_ID=cloud 00:01:09.404 + uname -a 00:01:09.404 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:09.404 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:11.944 Hugepages 00:01:11.944 node hugesize free / total 00:01:11.944 node0 1048576kB 0 / 0 00:01:12.205 node0 2048kB 0 / 0 00:01:12.205 node1 1048576kB 0 / 0 00:01:12.205 node1 2048kB 0 / 0 00:01:12.205 00:01:12.205 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:12.205 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:12.205 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:12.205 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:12.205 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:12.205 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:12.205 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:12.205 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:12.205 I/OAT 
0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:12.205 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:12.205 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:12.205 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:12.205 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:12.205 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:12.205 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:12.205 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:12.205 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:12.205 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:12.205 + rm -f /tmp/spdk-ld-path 00:01:12.205 + source autorun-spdk.conf 00:01:12.205 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.205 ++ SPDK_TEST_NVMF=1 00:01:12.205 ++ SPDK_TEST_NVME_CLI=1 00:01:12.205 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.205 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.205 ++ SPDK_TEST_VFIOUSER=1 00:01:12.205 ++ SPDK_RUN_UBSAN=1 00:01:12.205 ++ NET_TYPE=phy 00:01:12.205 ++ RUN_NIGHTLY=0 00:01:12.205 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:12.205 + [[ -n '' ]] 00:01:12.205 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.205 + for M in /var/spdk/build-*-manifest.txt 00:01:12.205 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:12.205 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:12.205 + for M in /var/spdk/build-*-manifest.txt 00:01:12.205 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:12.205 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:12.205 ++ uname 00:01:12.205 + [[ Linux == \L\i\n\u\x ]] 00:01:12.205 + sudo dmesg -T 00:01:12.466 + sudo dmesg --clear 00:01:12.466 + dmesg_pid=736173 00:01:12.466 + [[ Fedora Linux == FreeBSD ]] 00:01:12.466 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:12.466 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:12.466 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:12.466 + [[ -x /usr/src/fio-static/fio ]] 00:01:12.466 + export FIO_BIN=/usr/src/fio-static/fio 00:01:12.466 + FIO_BIN=/usr/src/fio-static/fio 00:01:12.466 + sudo dmesg -Tw 00:01:12.466 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:12.466 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:12.466 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:12.466 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:12.466 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:12.466 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:12.466 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:12.466 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:12.466 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.466 Test configuration: 00:01:12.466 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.466 SPDK_TEST_NVMF=1 00:01:12.466 SPDK_TEST_NVME_CLI=1 00:01:12.466 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.466 SPDK_TEST_NVMF_NICS=e810 00:01:12.466 SPDK_TEST_VFIOUSER=1 00:01:12.466 SPDK_RUN_UBSAN=1 00:01:12.466 NET_TYPE=phy 00:01:12.466 RUN_NIGHTLY=0 14:37:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:12.466 14:37:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:12.466 14:37:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:12.466 14:37:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:12.466 14:37:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.466 14:37:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.466 14:37:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.466 14:37:55 -- paths/export.sh@5 -- $ export PATH 00:01:12.466 14:37:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.466 14:37:55 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:12.466 14:37:55 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:12.466 14:37:55 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714135075.XXXXXX 00:01:12.466 14:37:55 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714135075.HqcMmr 00:01:12.466 14:37:55 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:12.466 14:37:55 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:12.466 14:37:55 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:12.466 14:37:55 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:12.466 14:37:55 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:12.466 14:37:55 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:12.466 14:37:55 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:12.466 14:37:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.466 14:37:55 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:12.466 14:37:55 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:12.466 14:37:55 -- pm/common@17 -- $ local monitor 00:01:12.466 14:37:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.466 14:37:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=736207 00:01:12.466 14:37:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.466 14:37:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=736209 00:01:12.466 14:37:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.466 14:37:55 -- pm/common@21 -- $ date +%s 00:01:12.466 14:37:55 -- pm/common@21 -- $ date +%s 00:01:12.466 14:37:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=736212 00:01:12.466 14:37:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.466 14:37:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=736215 00:01:12.466 14:37:55 -- pm/common@26 -- $ sleep 1 00:01:12.466 14:37:55 -- pm/common@21 -- $ date +%s 00:01:12.466 14:37:55 -- pm/common@21 -- $ date +%s 00:01:12.466 14:37:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135075 00:01:12.466 14:37:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135075 00:01:12.466 14:37:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135075 00:01:12.466 14:37:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135075 00:01:12.727 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135075_collect-vmstat.pm.log 00:01:12.727 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135075_collect-bmc-pm.bmc.pm.log 00:01:12.727 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135075_collect-cpu-load.pm.log 00:01:12.727 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135075_collect-cpu-temp.pm.log 00:01:13.670 14:37:56 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:13.670 14:37:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:13.670 14:37:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:13.670 14:37:56 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.670 14:37:56 -- spdk/autobuild.sh@16 -- $ date -u 00:01:13.670 Fri Apr 26 12:37:56 PM UTC 2024 00:01:13.670 14:37:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:13.670 v24.05-pre-449-g8571999d8 00:01:13.670 14:37:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:13.670 14:37:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:13.670 14:37:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:13.670 14:37:56 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:13.670 14:37:56 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:13.670 14:37:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.670 ************************************ 00:01:13.670 START TEST ubsan 00:01:13.670 ************************************ 00:01:13.670 14:37:56 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:13.670 using ubsan 00:01:13.670 00:01:13.670 real 0m0.001s 00:01:13.670 user 0m0.001s 00:01:13.670 sys 0m0.000s 00:01:13.670 14:37:56 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:13.670 14:37:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.670 ************************************ 00:01:13.670 END TEST ubsan 00:01:13.670 ************************************ 00:01:13.670 14:37:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:13.670 14:37:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:13.670 14:37:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:13.670 14:37:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:13.670 14:37:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:13.670 14:37:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:13.671 14:37:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:13.671 14:37:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:13.671 14:37:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:13.932 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:13.932 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:14.193 Using 'verbs' RDMA provider 00:01:30.051 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:42.276 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:42.276 Creating mk/config.mk...done. 00:01:42.276 Creating mk/cc.flags.mk...done. 00:01:42.276 Type 'make' to build. 
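(For reference, the configure step that just completed above can be reproduced outside of Jenkins. The sketch below is an assumption-laden illustration, not part of the CI log: it assumes a local SPDK checkout in ./spdk, copies the flags printed by autobuild.sh above, and substitutes $(nproc) for the CI's fixed -j144; the run_test wrapper and the collect-* resource monitors are CI-specific and are omitted.)

  # Sketch: reproduce the configure + build shown above on a local checkout.
  # Checkout path and job count are assumptions; the flags are taken from the log.
  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j"$(nproc)"
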
00:01:42.276 14:38:24 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:42.276 14:38:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:42.276 14:38:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:42.276 14:38:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.276 ************************************ 00:01:42.276 START TEST make 00:01:42.276 ************************************ 00:01:42.276 14:38:24 -- common/autotest_common.sh@1111 -- $ make -j144 00:01:42.537 make[1]: Nothing to be done for 'all'. 00:01:43.476 The Meson build system 00:01:43.476 Version: 1.3.1 00:01:43.476 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:43.476 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.476 Build type: native build 00:01:43.476 Project name: libvfio-user 00:01:43.476 Project version: 0.0.1 00:01:43.476 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:43.476 C linker for the host machine: cc ld.bfd 2.39-16 00:01:43.476 Host machine cpu family: x86_64 00:01:43.476 Host machine cpu: x86_64 00:01:43.476 Run-time dependency threads found: YES 00:01:43.476 Library dl found: YES 00:01:43.476 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:43.476 Run-time dependency json-c found: YES 0.17 00:01:43.476 Run-time dependency cmocka found: YES 1.1.7 00:01:43.476 Program pytest-3 found: NO 00:01:43.476 Program flake8 found: NO 00:01:43.476 Program misspell-fixer found: NO 00:01:43.476 Program restructuredtext-lint found: NO 00:01:43.476 Program valgrind found: YES (/usr/bin/valgrind) 00:01:43.476 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.476 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.476 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.476 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:43.476 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:43.476 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:43.476 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:43.476 Build targets in project: 8 00:01:43.476 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:43.476 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:43.476 00:01:43.476 libvfio-user 0.0.1 00:01:43.476 00:01:43.476 User defined options 00:01:43.476 buildtype : debug 00:01:43.476 default_library: shared 00:01:43.476 libdir : /usr/local/lib 00:01:43.476 00:01:43.476 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.101 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.101 [1/37] Compiling C object samples/null.p/null.c.o 00:01:44.101 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.101 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.102 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.102 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.102 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.102 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.102 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.102 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.102 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:44.102 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.102 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.102 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.102 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:44.102 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.102 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.102 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:44.102 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:44.102 [19/37] Compiling C object samples/server.p/server.c.o 00:01:44.102 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:44.102 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:44.102 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:44.102 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:44.102 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:44.102 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:44.102 [26/37] Compiling C object samples/client.p/client.c.o 00:01:44.102 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.102 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.102 [29/37] Linking target samples/client 00:01:44.102 [30/37] Linking target test/unit_tests 00:01:44.102 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:44.392 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:44.392 [33/37] Linking target samples/lspci 00:01:44.392 [34/37] Linking target samples/null 00:01:44.392 [35/37] Linking target samples/gpio-pci-idio-16 00:01:44.392 [36/37] Linking target samples/server 00:01:44.392 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:44.392 INFO: autodetecting backend as ninja 00:01:44.392 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
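(The libvfio-user sub-build configured above is a plain Meson/Ninja flow driven by the SPDK makefiles because --with-vfio-user was passed to configure. A standalone equivalent is sketched below purely for illustration; the source, build, and staging paths are placeholders rather than the CI paths, while buildtype=debug and default_library=shared match the "User defined options" summary printed above.)

  # Sketch: standalone Meson configure/build/install mirroring the options above.
  # Directory names here are assumptions, not the paths used by the CI job.
  meson setup build-debug ./libvfio-user --buildtype=debug -Ddefault_library=shared
  ninja -C build-debug
  DESTDIR=/tmp/libvfio-user meson install --quiet -C build-debug
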
00:01:44.392 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.665 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.665 ninja: no work to do. 00:01:51.247 The Meson build system 00:01:51.247 Version: 1.3.1 00:01:51.247 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:51.247 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:51.247 Build type: native build 00:01:51.247 Program cat found: YES (/usr/bin/cat) 00:01:51.247 Project name: DPDK 00:01:51.247 Project version: 23.11.0 00:01:51.247 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.247 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.247 Host machine cpu family: x86_64 00:01:51.247 Host machine cpu: x86_64 00:01:51.247 Message: ## Building in Developer Mode ## 00:01:51.247 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.247 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:51.247 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.247 Program python3 found: YES (/usr/bin/python3) 00:01:51.247 Program cat found: YES (/usr/bin/cat) 00:01:51.247 Compiler for C supports arguments -march=native: YES 00:01:51.247 Checking for size of "void *" : 8 00:01:51.247 Checking for size of "void *" : 8 (cached) 00:01:51.247 Library m found: YES 00:01:51.247 Library numa found: YES 00:01:51.247 Has header "numaif.h" : YES 00:01:51.247 Library fdt found: NO 00:01:51.247 Library execinfo found: NO 00:01:51.247 Has header "execinfo.h" : YES 00:01:51.247 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.247 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.247 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.247 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.247 Run-time dependency openssl found: YES 3.0.9 00:01:51.247 Run-time dependency libpcap found: YES 1.10.4 00:01:51.247 Has header "pcap.h" with dependency libpcap: YES 00:01:51.247 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.247 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.247 Compiler for C supports arguments -Wformat: YES 00:01:51.247 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.247 Compiler for C supports arguments -Wformat-security: NO 00:01:51.247 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.247 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.247 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.247 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.247 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.247 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.247 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.247 Compiler for C supports arguments -Wundef: YES 00:01:51.247 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.247 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.247 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:51.247 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:51.247 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.247 Program objdump found: YES (/usr/bin/objdump) 00:01:51.247 Compiler for C supports arguments -mavx512f: YES 00:01:51.247 Checking if "AVX512 checking" compiles: YES 00:01:51.247 Fetching value of define "__SSE4_2__" : 1 00:01:51.247 Fetching value of define "__AES__" : 1 00:01:51.247 Fetching value of define "__AVX__" : 1 00:01:51.247 Fetching value of define "__AVX2__" : 1 00:01:51.247 Fetching value of define "__AVX512BW__" : 1 00:01:51.247 Fetching value of define "__AVX512CD__" : 1 00:01:51.247 Fetching value of define "__AVX512DQ__" : 1 00:01:51.247 Fetching value of define "__AVX512F__" : 1 00:01:51.247 Fetching value of define "__AVX512VL__" : 1 00:01:51.247 Fetching value of define "__PCLMUL__" : 1 00:01:51.247 Fetching value of define "__RDRND__" : 1 00:01:51.247 Fetching value of define "__RDSEED__" : 1 00:01:51.247 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:51.247 Fetching value of define "__znver1__" : (undefined) 00:01:51.247 Fetching value of define "__znver2__" : (undefined) 00:01:51.247 Fetching value of define "__znver3__" : (undefined) 00:01:51.247 Fetching value of define "__znver4__" : (undefined) 00:01:51.247 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.247 Message: lib/log: Defining dependency "log" 00:01:51.247 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.247 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.247 Checking for function "getentropy" : NO 00:01:51.247 Message: lib/eal: Defining dependency "eal" 00:01:51.247 Message: lib/ring: Defining dependency "ring" 00:01:51.247 Message: lib/rcu: Defining dependency "rcu" 00:01:51.247 Message: lib/mempool: Defining dependency "mempool" 00:01:51.247 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.247 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.247 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.247 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.247 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.247 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.247 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:51.247 Compiler for C supports arguments -mpclmul: YES 00:01:51.247 Compiler for C supports arguments -maes: YES 00:01:51.247 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.247 Compiler for C supports arguments -mavx512bw: YES 00:01:51.247 Compiler for C supports arguments -mavx512dq: YES 00:01:51.247 Compiler for C supports arguments -mavx512vl: YES 00:01:51.247 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.247 Compiler for C supports arguments -mavx2: YES 00:01:51.247 Compiler for C supports arguments -mavx: YES 00:01:51.247 Message: lib/net: Defining dependency "net" 00:01:51.247 Message: lib/meter: Defining dependency "meter" 00:01:51.247 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.247 Message: lib/pci: Defining dependency "pci" 00:01:51.247 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.247 Message: lib/hash: Defining dependency "hash" 00:01:51.247 Message: lib/timer: Defining dependency "timer" 00:01:51.247 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.247 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.247 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.247 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.247 
Message: lib/power: Defining dependency "power" 00:01:51.247 Message: lib/reorder: Defining dependency "reorder" 00:01:51.247 Message: lib/security: Defining dependency "security" 00:01:51.247 Has header "linux/userfaultfd.h" : YES 00:01:51.247 Has header "linux/vduse.h" : YES 00:01:51.247 Message: lib/vhost: Defining dependency "vhost" 00:01:51.247 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.247 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.247 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.247 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.247 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:51.247 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:51.247 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:51.247 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:51.247 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:51.247 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:51.247 Program doxygen found: YES (/usr/bin/doxygen) 00:01:51.247 Configuring doxy-api-html.conf using configuration 00:01:51.247 Configuring doxy-api-man.conf using configuration 00:01:51.247 Program mandb found: YES (/usr/bin/mandb) 00:01:51.247 Program sphinx-build found: NO 00:01:51.247 Configuring rte_build_config.h using configuration 00:01:51.247 Message: 00:01:51.247 ================= 00:01:51.247 Applications Enabled 00:01:51.247 ================= 00:01:51.247 00:01:51.247 apps: 00:01:51.247 00:01:51.247 00:01:51.247 Message: 00:01:51.247 ================= 00:01:51.247 Libraries Enabled 00:01:51.247 ================= 00:01:51.247 00:01:51.247 libs: 00:01:51.247 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:51.247 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:51.247 cryptodev, dmadev, power, reorder, security, vhost, 00:01:51.247 00:01:51.247 Message: 00:01:51.247 =============== 00:01:51.247 Drivers Enabled 00:01:51.247 =============== 00:01:51.247 00:01:51.247 common: 00:01:51.247 00:01:51.247 bus: 00:01:51.247 pci, vdev, 00:01:51.247 mempool: 00:01:51.247 ring, 00:01:51.247 dma: 00:01:51.247 00:01:51.247 net: 00:01:51.247 00:01:51.247 crypto: 00:01:51.247 00:01:51.247 compress: 00:01:51.247 00:01:51.247 vdpa: 00:01:51.247 00:01:51.247 00:01:51.247 Message: 00:01:51.247 ================= 00:01:51.247 Content Skipped 00:01:51.247 ================= 00:01:51.247 00:01:51.247 apps: 00:01:51.247 dumpcap: explicitly disabled via build config 00:01:51.247 graph: explicitly disabled via build config 00:01:51.247 pdump: explicitly disabled via build config 00:01:51.247 proc-info: explicitly disabled via build config 00:01:51.247 test-acl: explicitly disabled via build config 00:01:51.247 test-bbdev: explicitly disabled via build config 00:01:51.247 test-cmdline: explicitly disabled via build config 00:01:51.247 test-compress-perf: explicitly disabled via build config 00:01:51.247 test-crypto-perf: explicitly disabled via build config 00:01:51.247 test-dma-perf: explicitly disabled via build config 00:01:51.247 test-eventdev: explicitly disabled via build config 00:01:51.247 test-fib: explicitly disabled via build config 00:01:51.247 test-flow-perf: explicitly disabled via build config 00:01:51.247 test-gpudev: explicitly disabled via build config 00:01:51.247 test-mldev: explicitly disabled via build config 
00:01:51.247 test-pipeline: explicitly disabled via build config 00:01:51.247 test-pmd: explicitly disabled via build config 00:01:51.247 test-regex: explicitly disabled via build config 00:01:51.247 test-sad: explicitly disabled via build config 00:01:51.247 test-security-perf: explicitly disabled via build config 00:01:51.247 00:01:51.247 libs: 00:01:51.247 metrics: explicitly disabled via build config 00:01:51.247 acl: explicitly disabled via build config 00:01:51.247 bbdev: explicitly disabled via build config 00:01:51.247 bitratestats: explicitly disabled via build config 00:01:51.247 bpf: explicitly disabled via build config 00:01:51.247 cfgfile: explicitly disabled via build config 00:01:51.247 distributor: explicitly disabled via build config 00:01:51.247 efd: explicitly disabled via build config 00:01:51.247 eventdev: explicitly disabled via build config 00:01:51.247 dispatcher: explicitly disabled via build config 00:01:51.247 gpudev: explicitly disabled via build config 00:01:51.247 gro: explicitly disabled via build config 00:01:51.247 gso: explicitly disabled via build config 00:01:51.247 ip_frag: explicitly disabled via build config 00:01:51.247 jobstats: explicitly disabled via build config 00:01:51.247 latencystats: explicitly disabled via build config 00:01:51.247 lpm: explicitly disabled via build config 00:01:51.247 member: explicitly disabled via build config 00:01:51.247 pcapng: explicitly disabled via build config 00:01:51.247 rawdev: explicitly disabled via build config 00:01:51.247 regexdev: explicitly disabled via build config 00:01:51.247 mldev: explicitly disabled via build config 00:01:51.247 rib: explicitly disabled via build config 00:01:51.247 sched: explicitly disabled via build config 00:01:51.247 stack: explicitly disabled via build config 00:01:51.247 ipsec: explicitly disabled via build config 00:01:51.247 pdcp: explicitly disabled via build config 00:01:51.247 fib: explicitly disabled via build config 00:01:51.247 port: explicitly disabled via build config 00:01:51.247 pdump: explicitly disabled via build config 00:01:51.247 table: explicitly disabled via build config 00:01:51.247 pipeline: explicitly disabled via build config 00:01:51.247 graph: explicitly disabled via build config 00:01:51.247 node: explicitly disabled via build config 00:01:51.247 00:01:51.247 drivers: 00:01:51.247 common/cpt: not in enabled drivers build config 00:01:51.247 common/dpaax: not in enabled drivers build config 00:01:51.247 common/iavf: not in enabled drivers build config 00:01:51.247 common/idpf: not in enabled drivers build config 00:01:51.247 common/mvep: not in enabled drivers build config 00:01:51.247 common/octeontx: not in enabled drivers build config 00:01:51.247 bus/auxiliary: not in enabled drivers build config 00:01:51.247 bus/cdx: not in enabled drivers build config 00:01:51.247 bus/dpaa: not in enabled drivers build config 00:01:51.247 bus/fslmc: not in enabled drivers build config 00:01:51.247 bus/ifpga: not in enabled drivers build config 00:01:51.247 bus/platform: not in enabled drivers build config 00:01:51.247 bus/vmbus: not in enabled drivers build config 00:01:51.247 common/cnxk: not in enabled drivers build config 00:01:51.247 common/mlx5: not in enabled drivers build config 00:01:51.247 common/nfp: not in enabled drivers build config 00:01:51.247 common/qat: not in enabled drivers build config 00:01:51.247 common/sfc_efx: not in enabled drivers build config 00:01:51.247 mempool/bucket: not in enabled drivers build config 00:01:51.247 mempool/cnxk: 
not in enabled drivers build config 00:01:51.247 mempool/dpaa: not in enabled drivers build config 00:01:51.247 mempool/dpaa2: not in enabled drivers build config 00:01:51.247 mempool/octeontx: not in enabled drivers build config 00:01:51.247 mempool/stack: not in enabled drivers build config 00:01:51.247 dma/cnxk: not in enabled drivers build config 00:01:51.247 dma/dpaa: not in enabled drivers build config 00:01:51.247 dma/dpaa2: not in enabled drivers build config 00:01:51.247 dma/hisilicon: not in enabled drivers build config 00:01:51.247 dma/idxd: not in enabled drivers build config 00:01:51.247 dma/ioat: not in enabled drivers build config 00:01:51.247 dma/skeleton: not in enabled drivers build config 00:01:51.247 net/af_packet: not in enabled drivers build config 00:01:51.247 net/af_xdp: not in enabled drivers build config 00:01:51.247 net/ark: not in enabled drivers build config 00:01:51.247 net/atlantic: not in enabled drivers build config 00:01:51.247 net/avp: not in enabled drivers build config 00:01:51.247 net/axgbe: not in enabled drivers build config 00:01:51.247 net/bnx2x: not in enabled drivers build config 00:01:51.247 net/bnxt: not in enabled drivers build config 00:01:51.247 net/bonding: not in enabled drivers build config 00:01:51.247 net/cnxk: not in enabled drivers build config 00:01:51.247 net/cpfl: not in enabled drivers build config 00:01:51.247 net/cxgbe: not in enabled drivers build config 00:01:51.247 net/dpaa: not in enabled drivers build config 00:01:51.247 net/dpaa2: not in enabled drivers build config 00:01:51.247 net/e1000: not in enabled drivers build config 00:01:51.247 net/ena: not in enabled drivers build config 00:01:51.247 net/enetc: not in enabled drivers build config 00:01:51.247 net/enetfec: not in enabled drivers build config 00:01:51.247 net/enic: not in enabled drivers build config 00:01:51.247 net/failsafe: not in enabled drivers build config 00:01:51.247 net/fm10k: not in enabled drivers build config 00:01:51.247 net/gve: not in enabled drivers build config 00:01:51.247 net/hinic: not in enabled drivers build config 00:01:51.247 net/hns3: not in enabled drivers build config 00:01:51.247 net/i40e: not in enabled drivers build config 00:01:51.247 net/iavf: not in enabled drivers build config 00:01:51.247 net/ice: not in enabled drivers build config 00:01:51.247 net/idpf: not in enabled drivers build config 00:01:51.247 net/igc: not in enabled drivers build config 00:01:51.247 net/ionic: not in enabled drivers build config 00:01:51.247 net/ipn3ke: not in enabled drivers build config 00:01:51.247 net/ixgbe: not in enabled drivers build config 00:01:51.247 net/mana: not in enabled drivers build config 00:01:51.247 net/memif: not in enabled drivers build config 00:01:51.247 net/mlx4: not in enabled drivers build config 00:01:51.247 net/mlx5: not in enabled drivers build config 00:01:51.247 net/mvneta: not in enabled drivers build config 00:01:51.247 net/mvpp2: not in enabled drivers build config 00:01:51.247 net/netvsc: not in enabled drivers build config 00:01:51.247 net/nfb: not in enabled drivers build config 00:01:51.247 net/nfp: not in enabled drivers build config 00:01:51.247 net/ngbe: not in enabled drivers build config 00:01:51.247 net/null: not in enabled drivers build config 00:01:51.248 net/octeontx: not in enabled drivers build config 00:01:51.248 net/octeon_ep: not in enabled drivers build config 00:01:51.248 net/pcap: not in enabled drivers build config 00:01:51.248 net/pfe: not in enabled drivers build config 00:01:51.248 net/qede: 
not in enabled drivers build config 00:01:51.248 net/ring: not in enabled drivers build config 00:01:51.248 net/sfc: not in enabled drivers build config 00:01:51.248 net/softnic: not in enabled drivers build config 00:01:51.248 net/tap: not in enabled drivers build config 00:01:51.248 net/thunderx: not in enabled drivers build config 00:01:51.248 net/txgbe: not in enabled drivers build config 00:01:51.248 net/vdev_netvsc: not in enabled drivers build config 00:01:51.248 net/vhost: not in enabled drivers build config 00:01:51.248 net/virtio: not in enabled drivers build config 00:01:51.248 net/vmxnet3: not in enabled drivers build config 00:01:51.248 raw/*: missing internal dependency, "rawdev" 00:01:51.248 crypto/armv8: not in enabled drivers build config 00:01:51.248 crypto/bcmfs: not in enabled drivers build config 00:01:51.248 crypto/caam_jr: not in enabled drivers build config 00:01:51.248 crypto/ccp: not in enabled drivers build config 00:01:51.248 crypto/cnxk: not in enabled drivers build config 00:01:51.248 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.248 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.248 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.248 crypto/mlx5: not in enabled drivers build config 00:01:51.248 crypto/mvsam: not in enabled drivers build config 00:01:51.248 crypto/nitrox: not in enabled drivers build config 00:01:51.248 crypto/null: not in enabled drivers build config 00:01:51.248 crypto/octeontx: not in enabled drivers build config 00:01:51.248 crypto/openssl: not in enabled drivers build config 00:01:51.248 crypto/scheduler: not in enabled drivers build config 00:01:51.248 crypto/uadk: not in enabled drivers build config 00:01:51.248 crypto/virtio: not in enabled drivers build config 00:01:51.248 compress/isal: not in enabled drivers build config 00:01:51.248 compress/mlx5: not in enabled drivers build config 00:01:51.248 compress/octeontx: not in enabled drivers build config 00:01:51.248 compress/zlib: not in enabled drivers build config 00:01:51.248 regex/*: missing internal dependency, "regexdev" 00:01:51.248 ml/*: missing internal dependency, "mldev" 00:01:51.248 vdpa/ifc: not in enabled drivers build config 00:01:51.248 vdpa/mlx5: not in enabled drivers build config 00:01:51.248 vdpa/nfp: not in enabled drivers build config 00:01:51.248 vdpa/sfc: not in enabled drivers build config 00:01:51.248 event/*: missing internal dependency, "eventdev" 00:01:51.248 baseband/*: missing internal dependency, "bbdev" 00:01:51.248 gpu/*: missing internal dependency, "gpudev" 00:01:51.248 00:01:51.248 00:01:51.248 Build targets in project: 84 00:01:51.248 00:01:51.248 DPDK 23.11.0 00:01:51.248 00:01:51.248 User defined options 00:01:51.248 buildtype : debug 00:01:51.248 default_library : shared 00:01:51.248 libdir : lib 00:01:51.248 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:51.248 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:51.248 c_link_args : 00:01:51.248 cpu_instruction_set: native 00:01:51.248 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:51.248 disable_libs : 
pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:51.248 enable_docs : false 00:01:51.248 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:51.248 enable_kmods : false 00:01:51.248 tests : false 00:01:51.248 00:01:51.248 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.248 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:51.507 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.507 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.507 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.507 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.507 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.507 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.507 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.507 [8/264] Linking static target lib/librte_kvargs.a 00:01:51.507 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.507 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.507 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.507 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.507 [13/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.507 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.507 [15/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.507 [16/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.507 [17/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.507 [18/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.507 [19/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.507 [20/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.507 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.507 [22/264] Linking static target lib/librte_log.a 00:01:51.507 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.507 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.507 [25/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.507 [26/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.507 [27/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.507 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.507 [29/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.507 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.507 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.507 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.769 [33/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.769 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 
00:01:51.769 [35/264] Linking static target lib/librte_pci.a 00:01:51.769 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.769 [37/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.769 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:51.769 [39/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.769 [40/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:51.769 [41/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:51.769 [42/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.769 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:51.769 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.769 [45/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.769 [46/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.769 [47/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.769 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.769 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.769 [50/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.027 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.027 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.027 [53/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.027 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.027 [55/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.027 [56/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.027 [57/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.027 [58/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.027 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.027 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.027 [61/264] Linking static target lib/librte_meter.a 00:01:52.027 [62/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.027 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.027 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.027 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.027 [66/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.027 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.027 [68/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.027 [69/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.027 [70/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.027 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.027 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.027 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.027 [74/264] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.027 [75/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.027 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.027 [77/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.027 [78/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.027 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.027 [80/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.027 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.027 [82/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.027 [83/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.027 [84/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.027 [85/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.027 [86/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.027 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.027 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.027 [89/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.027 [90/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.027 [91/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.027 [92/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.027 [93/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.027 [94/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.027 [95/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.027 [96/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.027 [97/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.027 [98/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.027 [99/264] Linking static target lib/librte_telemetry.a 00:01:52.027 [100/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.027 [101/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.027 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.027 [103/264] Linking static target lib/librte_rcu.a 00:01:52.027 [104/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:52.027 [105/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.027 [106/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.027 [107/264] Linking static target lib/librte_ring.a 00:01:52.027 [108/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.027 [109/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.027 [110/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.027 [111/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.027 [112/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.027 [113/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.027 [114/264] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.027 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.027 [116/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.027 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.027 [118/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.027 [119/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.027 [120/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.027 [121/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.027 [122/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.027 [123/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.027 [124/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.027 [125/264] Linking static target lib/librte_timer.a 00:01:52.027 [126/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.027 [127/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.027 [128/264] Linking static target lib/librte_net.a 00:01:52.027 [129/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.027 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.027 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.027 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.027 [133/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.027 [134/264] Linking static target lib/librte_cmdline.a 00:01:52.027 [135/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.027 [136/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.027 [137/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:52.027 [138/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.027 [139/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.027 [140/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.027 [141/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.027 [142/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.027 [143/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.027 [144/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.027 [145/264] Linking static target lib/librte_compressdev.a 00:01:52.027 [146/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.027 [147/264] Linking static target lib/librte_dmadev.a 00:01:52.027 [148/264] Linking static target lib/librte_power.a 00:01:52.027 [149/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.287 [150/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.287 [151/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.287 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.287 [153/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.287 [154/264] Linking target lib/librte_log.so.24.0 00:01:52.287 [155/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.287 [156/264] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.287 [157/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.287 [158/264] Linking static target lib/librte_reorder.a 00:01:52.287 [159/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.287 [160/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.287 [161/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.287 [162/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.287 [163/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.287 [164/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.287 [165/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.287 [166/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.287 [167/264] Linking static target lib/librte_mempool.a 00:01:52.287 [168/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.287 [169/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.287 [170/264] Linking static target lib/librte_eal.a 00:01:52.287 [171/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.287 [172/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.287 [173/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.287 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.287 [175/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:52.287 [176/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.287 [177/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.287 [178/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.287 [179/264] Linking static target lib/librte_mbuf.a 00:01:52.287 [180/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.287 [181/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:52.287 [182/264] Linking static target lib/librte_security.a 00:01:52.287 [183/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.287 [184/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.287 [185/264] Linking static target lib/librte_hash.a 00:01:52.287 [186/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.287 [187/264] Linking static target drivers/librte_bus_vdev.a 00:01:52.287 [188/264] Linking target lib/librte_kvargs.so.24.0 00:01:52.287 [189/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:52.287 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.287 [191/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.287 [192/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.287 [193/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.287 [194/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.547 [195/264] Linking static target drivers/librte_bus_pci.a 00:01:52.547 [196/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:52.547 [197/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.547 [198/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.547 [199/264] Linking static target lib/librte_cryptodev.a 00:01:52.547 [200/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.547 [201/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:52.547 [202/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.547 [203/264] Linking static target drivers/librte_mempool_ring.a 00:01:52.547 [204/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.547 [205/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.547 [206/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.547 [207/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.547 [208/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.547 [209/264] Linking target lib/librte_telemetry.so.24.0 00:01:52.547 [210/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.807 [211/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.807 [212/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:52.807 [213/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:52.807 [214/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.807 [215/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.068 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.068 [217/264] Linking static target lib/librte_ethdev.a 00:01:53.068 [218/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.068 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.068 [220/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.068 [221/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.329 [222/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.329 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.270 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.270 [225/264] Linking static target lib/librte_vhost.a 00:01:54.530 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.445 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.036 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.979 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.979 [230/264] Linking target lib/librte_eal.so.24.0 00:02:03.979 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:03.979 [232/264] Linking target 
lib/librte_ring.so.24.0 00:02:03.979 [233/264] Linking target lib/librte_meter.so.24.0 00:02:03.979 [234/264] Linking target lib/librte_pci.so.24.0 00:02:03.979 [235/264] Linking target lib/librte_dmadev.so.24.0 00:02:03.979 [236/264] Linking target lib/librte_timer.so.24.0 00:02:03.979 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:04.241 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:04.241 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:04.241 [240/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:04.241 [241/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:04.241 [242/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:04.241 [243/264] Linking target lib/librte_rcu.so.24.0 00:02:04.241 [244/264] Linking target lib/librte_mempool.so.24.0 00:02:04.241 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:04.503 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:04.503 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:04.503 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:04.503 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:04.503 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:04.765 [251/264] Linking target lib/librte_compressdev.so.24.0 00:02:04.765 [252/264] Linking target lib/librte_reorder.so.24.0 00:02:04.765 [253/264] Linking target lib/librte_net.so.24.0 00:02:04.765 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:04.765 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:04.765 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:04.765 [257/264] Linking target lib/librte_hash.so.24.0 00:02:04.765 [258/264] Linking target lib/librte_cmdline.so.24.0 00:02:04.765 [259/264] Linking target lib/librte_ethdev.so.24.0 00:02:04.765 [260/264] Linking target lib/librte_security.so.24.0 00:02:05.025 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:05.025 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:05.025 [263/264] Linking target lib/librte_power.so.24.0 00:02:05.025 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:05.025 INFO: autodetecting backend as ninja 00:02:05.025 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:06.409 CC lib/ut_mock/mock.o 00:02:06.409 CC lib/log/log.o 00:02:06.409 CC lib/log/log_flags.o 00:02:06.409 CC lib/log/log_deprecated.o 00:02:06.409 CC lib/ut/ut.o 00:02:06.409 LIB libspdk_ut_mock.a 00:02:06.409 SO libspdk_ut_mock.so.6.0 00:02:06.409 LIB libspdk_log.a 00:02:06.409 LIB libspdk_ut.a 00:02:06.409 SO libspdk_log.so.7.0 00:02:06.409 SO libspdk_ut.so.2.0 00:02:06.409 SYMLINK libspdk_ut_mock.so 00:02:06.409 SYMLINK libspdk_ut.so 00:02:06.409 SYMLINK libspdk_log.so 00:02:06.980 CXX lib/trace_parser/trace.o 00:02:06.980 CC lib/dma/dma.o 00:02:06.980 CC lib/ioat/ioat.o 00:02:06.980 CC lib/util/base64.o 00:02:06.980 CC lib/util/bit_array.o 00:02:06.980 CC lib/util/cpuset.o 00:02:06.980 CC lib/util/crc16.o 00:02:06.980 CC lib/util/crc32.o 00:02:06.980 CC lib/util/crc32_ieee.o 00:02:06.980 
CC lib/util/crc32c.o 00:02:06.980 CC lib/util/crc64.o 00:02:06.980 CC lib/util/dif.o 00:02:06.980 CC lib/util/fd.o 00:02:06.980 CC lib/util/file.o 00:02:06.980 CC lib/util/hexlify.o 00:02:06.980 CC lib/util/iov.o 00:02:06.980 CC lib/util/math.o 00:02:06.980 CC lib/util/pipe.o 00:02:06.980 CC lib/util/strerror_tls.o 00:02:06.980 CC lib/util/string.o 00:02:06.980 CC lib/util/uuid.o 00:02:06.980 CC lib/util/fd_group.o 00:02:06.980 CC lib/util/xor.o 00:02:06.980 CC lib/util/zipf.o 00:02:06.980 CC lib/vfio_user/host/vfio_user_pci.o 00:02:06.980 CC lib/vfio_user/host/vfio_user.o 00:02:06.980 LIB libspdk_dma.a 00:02:07.242 SO libspdk_dma.so.4.0 00:02:07.242 LIB libspdk_ioat.a 00:02:07.242 SO libspdk_ioat.so.7.0 00:02:07.242 SYMLINK libspdk_dma.so 00:02:07.242 SYMLINK libspdk_ioat.so 00:02:07.242 LIB libspdk_vfio_user.a 00:02:07.242 SO libspdk_vfio_user.so.5.0 00:02:07.503 LIB libspdk_util.a 00:02:07.503 SYMLINK libspdk_vfio_user.so 00:02:07.503 SO libspdk_util.so.9.0 00:02:07.503 SYMLINK libspdk_util.so 00:02:07.764 LIB libspdk_trace_parser.a 00:02:07.764 SO libspdk_trace_parser.so.5.0 00:02:07.764 SYMLINK libspdk_trace_parser.so 00:02:08.023 CC lib/env_dpdk/env.o 00:02:08.023 CC lib/env_dpdk/memory.o 00:02:08.023 CC lib/env_dpdk/pci.o 00:02:08.023 CC lib/env_dpdk/init.o 00:02:08.023 CC lib/env_dpdk/threads.o 00:02:08.023 CC lib/env_dpdk/pci_ioat.o 00:02:08.023 CC lib/env_dpdk/pci_virtio.o 00:02:08.023 CC lib/env_dpdk/pci_vmd.o 00:02:08.023 CC lib/env_dpdk/pci_idxd.o 00:02:08.023 CC lib/env_dpdk/pci_event.o 00:02:08.023 CC lib/env_dpdk/sigbus_handler.o 00:02:08.023 CC lib/env_dpdk/pci_dpdk.o 00:02:08.023 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.023 CC lib/rdma/common.o 00:02:08.023 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.023 CC lib/rdma/rdma_verbs.o 00:02:08.023 CC lib/vmd/vmd.o 00:02:08.023 CC lib/vmd/led.o 00:02:08.023 CC lib/json/json_parse.o 00:02:08.023 CC lib/json/json_util.o 00:02:08.023 CC lib/json/json_write.o 00:02:08.023 CC lib/conf/conf.o 00:02:08.023 CC lib/idxd/idxd.o 00:02:08.023 CC lib/idxd/idxd_user.o 00:02:08.282 LIB libspdk_rdma.a 00:02:08.282 LIB libspdk_conf.a 00:02:08.282 SO libspdk_rdma.so.6.0 00:02:08.282 SO libspdk_conf.so.6.0 00:02:08.282 LIB libspdk_json.a 00:02:08.282 SYMLINK libspdk_rdma.so 00:02:08.282 SO libspdk_json.so.6.0 00:02:08.282 SYMLINK libspdk_conf.so 00:02:08.282 SYMLINK libspdk_json.so 00:02:08.543 LIB libspdk_idxd.a 00:02:08.543 SO libspdk_idxd.so.12.0 00:02:08.543 LIB libspdk_vmd.a 00:02:08.543 SYMLINK libspdk_idxd.so 00:02:08.543 SO libspdk_vmd.so.6.0 00:02:08.543 SYMLINK libspdk_vmd.so 00:02:08.804 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.804 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.804 CC lib/jsonrpc/jsonrpc_client.o 00:02:08.804 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:09.064 LIB libspdk_jsonrpc.a 00:02:09.064 SO libspdk_jsonrpc.so.6.0 00:02:09.064 SYMLINK libspdk_jsonrpc.so 00:02:09.064 LIB libspdk_env_dpdk.a 00:02:09.064 SO libspdk_env_dpdk.so.14.0 00:02:09.324 SYMLINK libspdk_env_dpdk.so 00:02:09.324 CC lib/rpc/rpc.o 00:02:09.584 LIB libspdk_rpc.a 00:02:09.584 SO libspdk_rpc.so.6.0 00:02:09.844 SYMLINK libspdk_rpc.so 00:02:10.105 CC lib/trace/trace.o 00:02:10.105 CC lib/trace/trace_flags.o 00:02:10.105 CC lib/trace/trace_rpc.o 00:02:10.105 CC lib/notify/notify.o 00:02:10.105 CC lib/notify/notify_rpc.o 00:02:10.105 CC lib/keyring/keyring.o 00:02:10.105 CC lib/keyring/keyring_rpc.o 00:02:10.366 LIB libspdk_notify.a 00:02:10.366 SO libspdk_notify.so.6.0 00:02:10.366 LIB libspdk_trace.a 00:02:10.366 LIB libspdk_keyring.a 00:02:10.366 SO 
libspdk_trace.so.10.0 00:02:10.366 SO libspdk_keyring.so.1.0 00:02:10.366 SYMLINK libspdk_notify.so 00:02:10.366 SYMLINK libspdk_trace.so 00:02:10.366 SYMLINK libspdk_keyring.so 00:02:10.936 CC lib/thread/thread.o 00:02:10.936 CC lib/thread/iobuf.o 00:02:10.936 CC lib/sock/sock.o 00:02:10.936 CC lib/sock/sock_rpc.o 00:02:11.196 LIB libspdk_sock.a 00:02:11.196 SO libspdk_sock.so.9.0 00:02:11.196 SYMLINK libspdk_sock.so 00:02:11.766 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:11.766 CC lib/nvme/nvme_ctrlr.o 00:02:11.766 CC lib/nvme/nvme_fabric.o 00:02:11.766 CC lib/nvme/nvme_ns_cmd.o 00:02:11.766 CC lib/nvme/nvme_ns.o 00:02:11.766 CC lib/nvme/nvme_pcie_common.o 00:02:11.766 CC lib/nvme/nvme_pcie.o 00:02:11.766 CC lib/nvme/nvme_qpair.o 00:02:11.766 CC lib/nvme/nvme.o 00:02:11.766 CC lib/nvme/nvme_quirks.o 00:02:11.766 CC lib/nvme/nvme_transport.o 00:02:11.766 CC lib/nvme/nvme_discovery.o 00:02:11.766 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:11.766 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:11.766 CC lib/nvme/nvme_opal.o 00:02:11.766 CC lib/nvme/nvme_tcp.o 00:02:11.766 CC lib/nvme/nvme_io_msg.o 00:02:11.766 CC lib/nvme/nvme_poll_group.o 00:02:11.766 CC lib/nvme/nvme_zns.o 00:02:11.766 CC lib/nvme/nvme_stubs.o 00:02:11.766 CC lib/nvme/nvme_auth.o 00:02:11.766 CC lib/nvme/nvme_cuse.o 00:02:11.766 CC lib/nvme/nvme_vfio_user.o 00:02:11.766 CC lib/nvme/nvme_rdma.o 00:02:12.025 LIB libspdk_thread.a 00:02:12.025 SO libspdk_thread.so.10.0 00:02:12.025 SYMLINK libspdk_thread.so 00:02:12.598 CC lib/blob/blobstore.o 00:02:12.598 CC lib/blob/request.o 00:02:12.598 CC lib/blob/zeroes.o 00:02:12.598 CC lib/blob/blob_bs_dev.o 00:02:12.598 CC lib/init/json_config.o 00:02:12.598 CC lib/init/subsystem.o 00:02:12.598 CC lib/init/subsystem_rpc.o 00:02:12.598 CC lib/init/rpc.o 00:02:12.598 CC lib/vfu_tgt/tgt_endpoint.o 00:02:12.598 CC lib/accel/accel.o 00:02:12.598 CC lib/virtio/virtio.o 00:02:12.598 CC lib/vfu_tgt/tgt_rpc.o 00:02:12.598 CC lib/accel/accel_rpc.o 00:02:12.598 CC lib/virtio/virtio_vhost_user.o 00:02:12.598 CC lib/accel/accel_sw.o 00:02:12.598 CC lib/virtio/virtio_vfio_user.o 00:02:12.598 CC lib/virtio/virtio_pci.o 00:02:12.598 LIB libspdk_init.a 00:02:12.859 SO libspdk_init.so.5.0 00:02:12.859 LIB libspdk_virtio.a 00:02:12.859 LIB libspdk_vfu_tgt.a 00:02:12.859 SYMLINK libspdk_init.so 00:02:12.859 SO libspdk_vfu_tgt.so.3.0 00:02:12.859 SO libspdk_virtio.so.7.0 00:02:12.859 SYMLINK libspdk_vfu_tgt.so 00:02:12.859 SYMLINK libspdk_virtio.so 00:02:13.120 CC lib/event/app.o 00:02:13.120 CC lib/event/reactor.o 00:02:13.120 CC lib/event/app_rpc.o 00:02:13.120 CC lib/event/log_rpc.o 00:02:13.120 CC lib/event/scheduler_static.o 00:02:13.382 LIB libspdk_accel.a 00:02:13.382 SO libspdk_accel.so.15.0 00:02:13.382 LIB libspdk_nvme.a 00:02:13.382 SYMLINK libspdk_accel.so 00:02:13.642 SO libspdk_nvme.so.13.0 00:02:13.642 LIB libspdk_event.a 00:02:13.642 SO libspdk_event.so.13.0 00:02:13.642 SYMLINK libspdk_event.so 00:02:13.904 CC lib/bdev/bdev.o 00:02:13.904 CC lib/bdev/bdev_rpc.o 00:02:13.904 CC lib/bdev/bdev_zone.o 00:02:13.904 CC lib/bdev/part.o 00:02:13.904 CC lib/bdev/scsi_nvme.o 00:02:13.904 SYMLINK libspdk_nvme.so 00:02:14.848 LIB libspdk_blob.a 00:02:14.848 SO libspdk_blob.so.11.0 00:02:14.848 SYMLINK libspdk_blob.so 00:02:15.420 CC lib/blobfs/blobfs.o 00:02:15.420 CC lib/lvol/lvol.o 00:02:15.420 CC lib/blobfs/tree.o 00:02:15.991 LIB libspdk_bdev.a 00:02:15.991 LIB libspdk_blobfs.a 00:02:15.991 SO libspdk_blobfs.so.10.0 00:02:15.991 SO libspdk_bdev.so.15.0 00:02:15.991 LIB libspdk_lvol.a 00:02:16.251 SO 
libspdk_lvol.so.10.0 00:02:16.251 SYMLINK libspdk_blobfs.so 00:02:16.251 SYMLINK libspdk_bdev.so 00:02:16.251 SYMLINK libspdk_lvol.so 00:02:16.511 CC lib/ublk/ublk.o 00:02:16.511 CC lib/ublk/ublk_rpc.o 00:02:16.511 CC lib/nvmf/ctrlr.o 00:02:16.511 CC lib/nvmf/ctrlr_discovery.o 00:02:16.511 CC lib/nbd/nbd.o 00:02:16.511 CC lib/nvmf/ctrlr_bdev.o 00:02:16.511 CC lib/scsi/dev.o 00:02:16.511 CC lib/nbd/nbd_rpc.o 00:02:16.511 CC lib/scsi/lun.o 00:02:16.511 CC lib/nvmf/subsystem.o 00:02:16.511 CC lib/ftl/ftl_core.o 00:02:16.511 CC lib/nvmf/nvmf.o 00:02:16.511 CC lib/scsi/port.o 00:02:16.511 CC lib/ftl/ftl_init.o 00:02:16.511 CC lib/nvmf/nvmf_rpc.o 00:02:16.511 CC lib/nvmf/tcp.o 00:02:16.511 CC lib/scsi/scsi.o 00:02:16.511 CC lib/ftl/ftl_layout.o 00:02:16.511 CC lib/nvmf/transport.o 00:02:16.511 CC lib/scsi/scsi_bdev.o 00:02:16.511 CC lib/ftl/ftl_debug.o 00:02:16.511 CC lib/scsi/scsi_pr.o 00:02:16.511 CC lib/ftl/ftl_io.o 00:02:16.511 CC lib/nvmf/vfio_user.o 00:02:16.511 CC lib/scsi/scsi_rpc.o 00:02:16.511 CC lib/nvmf/rdma.o 00:02:16.511 CC lib/ftl/ftl_sb.o 00:02:16.511 CC lib/scsi/task.o 00:02:16.511 CC lib/ftl/ftl_l2p.o 00:02:16.511 CC lib/ftl/ftl_l2p_flat.o 00:02:16.511 CC lib/ftl/ftl_nv_cache.o 00:02:16.511 CC lib/ftl/ftl_band.o 00:02:16.511 CC lib/ftl/ftl_band_ops.o 00:02:16.511 CC lib/ftl/ftl_writer.o 00:02:16.511 CC lib/ftl/ftl_rq.o 00:02:16.511 CC lib/ftl/ftl_reloc.o 00:02:16.511 CC lib/ftl/ftl_l2p_cache.o 00:02:16.511 CC lib/ftl/ftl_p2l.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:16.511 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:16.511 CC lib/ftl/utils/ftl_md.o 00:02:16.511 CC lib/ftl/utils/ftl_conf.o 00:02:16.511 CC lib/ftl/utils/ftl_mempool.o 00:02:16.511 CC lib/ftl/utils/ftl_bitmap.o 00:02:16.511 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:16.511 CC lib/ftl/utils/ftl_property.o 00:02:16.511 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:16.511 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:16.511 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:16.511 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:16.511 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:16.511 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:16.511 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:16.511 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:16.511 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:16.511 CC lib/ftl/base/ftl_base_dev.o 00:02:16.511 CC lib/ftl/base/ftl_base_bdev.o 00:02:16.511 CC lib/ftl/ftl_trace.o 00:02:17.080 LIB libspdk_nbd.a 00:02:17.080 SO libspdk_nbd.so.7.0 00:02:17.080 LIB libspdk_scsi.a 00:02:17.080 SYMLINK libspdk_nbd.so 00:02:17.080 SO libspdk_scsi.so.9.0 00:02:17.080 LIB libspdk_ublk.a 00:02:17.080 SYMLINK libspdk_scsi.so 00:02:17.341 SO libspdk_ublk.so.3.0 00:02:17.341 SYMLINK libspdk_ublk.so 00:02:17.341 LIB libspdk_ftl.a 00:02:17.601 CC lib/vhost/vhost.o 00:02:17.601 CC lib/vhost/vhost_blk.o 00:02:17.601 CC lib/iscsi/conn.o 00:02:17.601 CC lib/vhost/vhost_rpc.o 00:02:17.601 CC lib/iscsi/iscsi.o 00:02:17.601 CC lib/vhost/vhost_scsi.o 00:02:17.601 CC lib/iscsi/init_grp.o 00:02:17.601 CC lib/vhost/rte_vhost_user.o 00:02:17.601 CC 
lib/iscsi/param.o 00:02:17.601 CC lib/iscsi/md5.o 00:02:17.601 CC lib/iscsi/portal_grp.o 00:02:17.601 CC lib/iscsi/iscsi_subsystem.o 00:02:17.601 CC lib/iscsi/tgt_node.o 00:02:17.601 CC lib/iscsi/iscsi_rpc.o 00:02:17.601 CC lib/iscsi/task.o 00:02:17.601 SO libspdk_ftl.so.9.0 00:02:17.861 SYMLINK libspdk_ftl.so 00:02:18.432 LIB libspdk_nvmf.a 00:02:18.432 SO libspdk_nvmf.so.18.0 00:02:18.432 LIB libspdk_vhost.a 00:02:18.432 SO libspdk_vhost.so.8.0 00:02:18.692 SYMLINK libspdk_nvmf.so 00:02:18.692 SYMLINK libspdk_vhost.so 00:02:18.692 LIB libspdk_iscsi.a 00:02:18.692 SO libspdk_iscsi.so.8.0 00:02:18.952 SYMLINK libspdk_iscsi.so 00:02:19.551 CC module/vfu_device/vfu_virtio.o 00:02:19.551 CC module/vfu_device/vfu_virtio_blk.o 00:02:19.551 CC module/env_dpdk/env_dpdk_rpc.o 00:02:19.551 CC module/vfu_device/vfu_virtio_scsi.o 00:02:19.551 CC module/vfu_device/vfu_virtio_rpc.o 00:02:19.551 CC module/sock/posix/posix.o 00:02:19.551 CC module/blob/bdev/blob_bdev.o 00:02:19.551 CC module/scheduler/gscheduler/gscheduler.o 00:02:19.551 CC module/keyring/file/keyring.o 00:02:19.551 LIB libspdk_env_dpdk_rpc.a 00:02:19.551 CC module/accel/dsa/accel_dsa.o 00:02:19.551 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:19.551 CC module/keyring/file/keyring_rpc.o 00:02:19.551 CC module/accel/dsa/accel_dsa_rpc.o 00:02:19.551 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:19.551 CC module/accel/error/accel_error.o 00:02:19.551 CC module/accel/error/accel_error_rpc.o 00:02:19.551 CC module/accel/iaa/accel_iaa.o 00:02:19.551 CC module/accel/ioat/accel_ioat.o 00:02:19.551 CC module/accel/ioat/accel_ioat_rpc.o 00:02:19.551 CC module/accel/iaa/accel_iaa_rpc.o 00:02:19.839 SO libspdk_env_dpdk_rpc.so.6.0 00:02:19.839 SYMLINK libspdk_env_dpdk_rpc.so 00:02:19.839 LIB libspdk_scheduler_gscheduler.a 00:02:19.839 LIB libspdk_scheduler_dpdk_governor.a 00:02:19.839 LIB libspdk_keyring_file.a 00:02:19.839 SO libspdk_scheduler_gscheduler.so.4.0 00:02:19.839 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:19.839 LIB libspdk_accel_ioat.a 00:02:19.839 LIB libspdk_accel_error.a 00:02:19.839 LIB libspdk_scheduler_dynamic.a 00:02:19.839 SO libspdk_keyring_file.so.1.0 00:02:19.839 LIB libspdk_accel_iaa.a 00:02:19.839 SO libspdk_accel_ioat.so.6.0 00:02:19.839 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:19.839 SO libspdk_scheduler_dynamic.so.4.0 00:02:19.839 LIB libspdk_accel_dsa.a 00:02:19.839 LIB libspdk_blob_bdev.a 00:02:19.839 SO libspdk_accel_error.so.2.0 00:02:19.839 SYMLINK libspdk_scheduler_gscheduler.so 00:02:19.839 SO libspdk_accel_iaa.so.3.0 00:02:19.839 SYMLINK libspdk_keyring_file.so 00:02:19.839 SO libspdk_blob_bdev.so.11.0 00:02:19.839 SO libspdk_accel_dsa.so.5.0 00:02:20.100 SYMLINK libspdk_accel_ioat.so 00:02:20.100 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.100 SYMLINK libspdk_accel_error.so 00:02:20.100 SYMLINK libspdk_accel_iaa.so 00:02:20.100 SYMLINK libspdk_blob_bdev.so 00:02:20.100 SYMLINK libspdk_accel_dsa.so 00:02:20.100 LIB libspdk_vfu_device.a 00:02:20.100 SO libspdk_vfu_device.so.3.0 00:02:20.100 SYMLINK libspdk_vfu_device.so 00:02:20.359 LIB libspdk_sock_posix.a 00:02:20.359 SO libspdk_sock_posix.so.6.0 00:02:20.359 SYMLINK libspdk_sock_posix.so 00:02:20.616 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.616 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.616 CC module/bdev/delay/vbdev_delay.o 00:02:20.616 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:20.616 CC module/bdev/nvme/bdev_nvme.o 00:02:20.616 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:20.616 CC module/bdev/error/vbdev_error.o 
00:02:20.617 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.617 CC module/bdev/raid/bdev_raid.o 00:02:20.617 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.617 CC module/bdev/nvme/nvme_rpc.o 00:02:20.617 CC module/bdev/nvme/vbdev_opal.o 00:02:20.617 CC module/bdev/error/vbdev_error_rpc.o 00:02:20.617 CC module/bdev/raid/bdev_raid_sb.o 00:02:20.617 CC module/bdev/raid/raid0.o 00:02:20.617 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.617 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.617 CC module/bdev/raid/raid1.o 00:02:20.617 CC module/bdev/split/vbdev_split.o 00:02:20.617 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.617 CC module/bdev/raid/concat.o 00:02:20.617 CC module/bdev/null/bdev_null.o 00:02:20.617 CC module/bdev/null/bdev_null_rpc.o 00:02:20.617 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.617 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.617 CC module/bdev/malloc/bdev_malloc.o 00:02:20.617 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:20.617 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:20.617 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.617 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:20.617 CC module/bdev/passthru/vbdev_passthru.o 00:02:20.617 CC module/bdev/gpt/gpt.o 00:02:20.617 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:20.617 CC module/bdev/ftl/bdev_ftl.o 00:02:20.617 CC module/bdev/gpt/vbdev_gpt.o 00:02:20.617 CC module/bdev/aio/bdev_aio.o 00:02:20.617 CC module/bdev/aio/bdev_aio_rpc.o 00:02:20.617 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.617 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.617 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:20.617 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.617 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.876 LIB libspdk_blobfs_bdev.a 00:02:20.876 SO libspdk_blobfs_bdev.so.6.0 00:02:20.876 LIB libspdk_bdev_split.a 00:02:20.876 LIB libspdk_bdev_null.a 00:02:20.876 LIB libspdk_bdev_error.a 00:02:20.876 SO libspdk_bdev_null.so.6.0 00:02:20.876 SYMLINK libspdk_blobfs_bdev.so 00:02:20.876 SO libspdk_bdev_split.so.6.0 00:02:20.876 LIB libspdk_bdev_gpt.a 00:02:20.876 SO libspdk_bdev_error.so.6.0 00:02:20.876 LIB libspdk_bdev_ftl.a 00:02:20.876 LIB libspdk_bdev_passthru.a 00:02:20.876 LIB libspdk_bdev_zone_block.a 00:02:20.876 SO libspdk_bdev_gpt.so.6.0 00:02:20.876 LIB libspdk_bdev_malloc.a 00:02:20.876 SO libspdk_bdev_ftl.so.6.0 00:02:20.876 SYMLINK libspdk_bdev_split.so 00:02:20.876 LIB libspdk_bdev_delay.a 00:02:20.876 LIB libspdk_bdev_aio.a 00:02:20.876 SYMLINK libspdk_bdev_null.so 00:02:20.876 SO libspdk_bdev_passthru.so.6.0 00:02:20.876 LIB libspdk_bdev_iscsi.a 00:02:20.876 SYMLINK libspdk_bdev_error.so 00:02:21.136 SO libspdk_bdev_delay.so.6.0 00:02:21.136 SO libspdk_bdev_zone_block.so.6.0 00:02:21.136 SO libspdk_bdev_malloc.so.6.0 00:02:21.136 SO libspdk_bdev_aio.so.6.0 00:02:21.136 SYMLINK libspdk_bdev_gpt.so 00:02:21.136 SO libspdk_bdev_iscsi.so.6.0 00:02:21.136 LIB libspdk_bdev_lvol.a 00:02:21.136 SYMLINK libspdk_bdev_ftl.so 00:02:21.136 SYMLINK libspdk_bdev_passthru.so 00:02:21.136 SYMLINK libspdk_bdev_aio.so 00:02:21.136 SYMLINK libspdk_bdev_delay.so 00:02:21.136 SYMLINK libspdk_bdev_zone_block.so 00:02:21.136 SYMLINK libspdk_bdev_malloc.so 00:02:21.136 SO libspdk_bdev_lvol.so.6.0 00:02:21.136 LIB libspdk_bdev_virtio.a 00:02:21.136 SYMLINK libspdk_bdev_iscsi.so 00:02:21.136 SO libspdk_bdev_virtio.so.6.0 00:02:21.136 SYMLINK libspdk_bdev_lvol.so 00:02:21.136 SYMLINK libspdk_bdev_virtio.so 00:02:21.396 LIB libspdk_bdev_raid.a 00:02:21.396 SO libspdk_bdev_raid.so.6.0 00:02:21.656 SYMLINK 
libspdk_bdev_raid.so 00:02:22.597 LIB libspdk_bdev_nvme.a 00:02:22.597 SO libspdk_bdev_nvme.so.7.0 00:02:22.597 SYMLINK libspdk_bdev_nvme.so 00:02:23.169 CC module/event/subsystems/iobuf/iobuf.o 00:02:23.169 CC module/event/subsystems/vmd/vmd.o 00:02:23.169 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:23.169 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:23.169 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:23.169 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:23.169 CC module/event/subsystems/keyring/keyring.o 00:02:23.169 CC module/event/subsystems/scheduler/scheduler.o 00:02:23.169 CC module/event/subsystems/sock/sock.o 00:02:23.430 LIB libspdk_event_sock.a 00:02:23.430 LIB libspdk_event_vhost_blk.a 00:02:23.430 LIB libspdk_event_vfu_tgt.a 00:02:23.430 LIB libspdk_event_vmd.a 00:02:23.430 LIB libspdk_event_keyring.a 00:02:23.430 LIB libspdk_event_iobuf.a 00:02:23.430 LIB libspdk_event_scheduler.a 00:02:23.430 SO libspdk_event_vhost_blk.so.3.0 00:02:23.430 SO libspdk_event_vfu_tgt.so.3.0 00:02:23.430 SO libspdk_event_sock.so.5.0 00:02:23.430 SO libspdk_event_vmd.so.6.0 00:02:23.430 SO libspdk_event_keyring.so.1.0 00:02:23.430 SO libspdk_event_iobuf.so.3.0 00:02:23.430 SO libspdk_event_scheduler.so.4.0 00:02:23.430 SYMLINK libspdk_event_vfu_tgt.so 00:02:23.430 SYMLINK libspdk_event_vhost_blk.so 00:02:23.430 SYMLINK libspdk_event_sock.so 00:02:23.430 SYMLINK libspdk_event_keyring.so 00:02:23.430 SYMLINK libspdk_event_vmd.so 00:02:23.691 SYMLINK libspdk_event_iobuf.so 00:02:23.691 SYMLINK libspdk_event_scheduler.so 00:02:23.952 CC module/event/subsystems/accel/accel.o 00:02:23.952 LIB libspdk_event_accel.a 00:02:24.211 SO libspdk_event_accel.so.6.0 00:02:24.211 SYMLINK libspdk_event_accel.so 00:02:24.471 CC module/event/subsystems/bdev/bdev.o 00:02:24.731 LIB libspdk_event_bdev.a 00:02:24.731 SO libspdk_event_bdev.so.6.0 00:02:24.731 SYMLINK libspdk_event_bdev.so 00:02:25.302 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:25.302 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:25.302 CC module/event/subsystems/ublk/ublk.o 00:02:25.302 CC module/event/subsystems/scsi/scsi.o 00:02:25.302 CC module/event/subsystems/nbd/nbd.o 00:02:25.302 LIB libspdk_event_ublk.a 00:02:25.302 LIB libspdk_event_nbd.a 00:02:25.302 SO libspdk_event_ublk.so.3.0 00:02:25.302 LIB libspdk_event_scsi.a 00:02:25.302 SO libspdk_event_nbd.so.6.0 00:02:25.302 LIB libspdk_event_nvmf.a 00:02:25.302 SO libspdk_event_scsi.so.6.0 00:02:25.302 SYMLINK libspdk_event_ublk.so 00:02:25.561 SO libspdk_event_nvmf.so.6.0 00:02:25.561 SYMLINK libspdk_event_nbd.so 00:02:25.561 SYMLINK libspdk_event_scsi.so 00:02:25.561 SYMLINK libspdk_event_nvmf.so 00:02:25.821 CC module/event/subsystems/iscsi/iscsi.o 00:02:25.821 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.082 LIB libspdk_event_vhost_scsi.a 00:02:26.082 LIB libspdk_event_iscsi.a 00:02:26.082 SO libspdk_event_vhost_scsi.so.3.0 00:02:26.082 SO libspdk_event_iscsi.so.6.0 00:02:26.082 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.082 SYMLINK libspdk_event_iscsi.so 00:02:26.342 SO libspdk.so.6.0 00:02:26.342 SYMLINK libspdk.so 00:02:26.921 CC app/spdk_nvme_discover/discovery_aer.o 00:02:26.921 CXX app/trace/trace.o 00:02:26.921 CC app/spdk_lspci/spdk_lspci.o 00:02:26.921 CC app/trace_record/trace_record.o 00:02:26.921 CC app/spdk_nvme_identify/identify.o 00:02:26.921 TEST_HEADER include/spdk/accel.h 00:02:26.922 CC test/rpc_client/rpc_client_test.o 00:02:26.922 TEST_HEADER include/spdk/accel_module.h 00:02:26.922 TEST_HEADER include/spdk/barrier.h 
00:02:26.922 TEST_HEADER include/spdk/assert.h 00:02:26.922 CC app/spdk_nvme_perf/perf.o 00:02:26.922 TEST_HEADER include/spdk/base64.h 00:02:26.922 TEST_HEADER include/spdk/bdev.h 00:02:26.922 TEST_HEADER include/spdk/bdev_module.h 00:02:26.922 TEST_HEADER include/spdk/bdev_zone.h 00:02:26.922 CC app/spdk_top/spdk_top.o 00:02:26.922 TEST_HEADER include/spdk/bit_array.h 00:02:26.922 TEST_HEADER include/spdk/bit_pool.h 00:02:26.922 TEST_HEADER include/spdk/blob_bdev.h 00:02:26.922 TEST_HEADER include/spdk/blobfs.h 00:02:26.922 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:26.922 TEST_HEADER include/spdk/blob.h 00:02:26.922 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:26.922 TEST_HEADER include/spdk/conf.h 00:02:26.922 TEST_HEADER include/spdk/cpuset.h 00:02:26.922 TEST_HEADER include/spdk/config.h 00:02:26.922 TEST_HEADER include/spdk/crc16.h 00:02:26.922 CC app/nvmf_tgt/nvmf_main.o 00:02:26.922 TEST_HEADER include/spdk/crc32.h 00:02:26.922 CC app/vhost/vhost.o 00:02:26.922 TEST_HEADER include/spdk/dif.h 00:02:26.922 TEST_HEADER include/spdk/dma.h 00:02:26.922 TEST_HEADER include/spdk/crc64.h 00:02:26.922 TEST_HEADER include/spdk/endian.h 00:02:26.922 TEST_HEADER include/spdk/env_dpdk.h 00:02:26.922 TEST_HEADER include/spdk/event.h 00:02:26.922 TEST_HEADER include/spdk/env.h 00:02:26.922 TEST_HEADER include/spdk/fd_group.h 00:02:26.922 TEST_HEADER include/spdk/fd.h 00:02:26.922 TEST_HEADER include/spdk/file.h 00:02:26.922 TEST_HEADER include/spdk/ftl.h 00:02:26.922 TEST_HEADER include/spdk/gpt_spec.h 00:02:26.922 TEST_HEADER include/spdk/hexlify.h 00:02:26.922 CC app/spdk_dd/spdk_dd.o 00:02:26.922 TEST_HEADER include/spdk/histogram_data.h 00:02:26.922 TEST_HEADER include/spdk/idxd.h 00:02:26.922 TEST_HEADER include/spdk/idxd_spec.h 00:02:26.922 TEST_HEADER include/spdk/init.h 00:02:26.922 CC app/iscsi_tgt/iscsi_tgt.o 00:02:26.922 TEST_HEADER include/spdk/ioat_spec.h 00:02:26.922 TEST_HEADER include/spdk/iscsi_spec.h 00:02:26.922 TEST_HEADER include/spdk/ioat.h 00:02:26.922 TEST_HEADER include/spdk/jsonrpc.h 00:02:26.922 TEST_HEADER include/spdk/keyring.h 00:02:26.922 TEST_HEADER include/spdk/json.h 00:02:26.922 TEST_HEADER include/spdk/keyring_module.h 00:02:26.922 TEST_HEADER include/spdk/likely.h 00:02:26.922 TEST_HEADER include/spdk/log.h 00:02:26.922 TEST_HEADER include/spdk/memory.h 00:02:26.922 TEST_HEADER include/spdk/lvol.h 00:02:26.922 TEST_HEADER include/spdk/nbd.h 00:02:26.922 TEST_HEADER include/spdk/mmio.h 00:02:26.922 TEST_HEADER include/spdk/notify.h 00:02:26.922 TEST_HEADER include/spdk/nvme.h 00:02:26.922 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:26.922 TEST_HEADER include/spdk/nvme_intel.h 00:02:26.922 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:26.922 TEST_HEADER include/spdk/nvme_spec.h 00:02:26.922 TEST_HEADER include/spdk/nvme_zns.h 00:02:26.922 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:26.922 TEST_HEADER include/spdk/nvmf.h 00:02:26.922 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:26.922 TEST_HEADER include/spdk/nvmf_spec.h 00:02:26.922 TEST_HEADER include/spdk/opal.h 00:02:26.922 TEST_HEADER include/spdk/nvmf_transport.h 00:02:26.922 TEST_HEADER include/spdk/opal_spec.h 00:02:26.922 TEST_HEADER include/spdk/pci_ids.h 00:02:26.922 TEST_HEADER include/spdk/pipe.h 00:02:26.922 CC app/spdk_tgt/spdk_tgt.o 00:02:26.922 TEST_HEADER include/spdk/queue.h 00:02:26.922 TEST_HEADER include/spdk/reduce.h 00:02:26.922 TEST_HEADER include/spdk/rpc.h 00:02:26.922 TEST_HEADER include/spdk/scheduler.h 00:02:26.922 TEST_HEADER include/spdk/scsi.h 00:02:26.922 TEST_HEADER 
include/spdk/sock.h 00:02:26.922 TEST_HEADER include/spdk/scsi_spec.h 00:02:26.922 TEST_HEADER include/spdk/stdinc.h 00:02:26.922 TEST_HEADER include/spdk/thread.h 00:02:26.922 TEST_HEADER include/spdk/trace.h 00:02:26.922 TEST_HEADER include/spdk/string.h 00:02:26.922 TEST_HEADER include/spdk/trace_parser.h 00:02:26.922 TEST_HEADER include/spdk/ublk.h 00:02:26.922 TEST_HEADER include/spdk/util.h 00:02:26.922 TEST_HEADER include/spdk/tree.h 00:02:26.922 TEST_HEADER include/spdk/uuid.h 00:02:26.922 TEST_HEADER include/spdk/version.h 00:02:26.922 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:26.922 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:26.922 TEST_HEADER include/spdk/vhost.h 00:02:26.922 TEST_HEADER include/spdk/xor.h 00:02:26.922 TEST_HEADER include/spdk/zipf.h 00:02:26.922 TEST_HEADER include/spdk/vmd.h 00:02:26.922 CXX test/cpp_headers/accel_module.o 00:02:26.922 CXX test/cpp_headers/accel.o 00:02:26.922 CXX test/cpp_headers/barrier.o 00:02:26.922 CXX test/cpp_headers/assert.o 00:02:26.922 CXX test/cpp_headers/base64.o 00:02:26.922 CXX test/cpp_headers/bdev.o 00:02:26.922 CXX test/cpp_headers/bdev_zone.o 00:02:26.922 CXX test/cpp_headers/bit_array.o 00:02:26.922 CXX test/cpp_headers/bdev_module.o 00:02:26.922 CXX test/cpp_headers/bit_pool.o 00:02:26.922 CXX test/cpp_headers/blobfs.o 00:02:26.922 CXX test/cpp_headers/blob.o 00:02:26.922 CXX test/cpp_headers/blob_bdev.o 00:02:26.922 CXX test/cpp_headers/blobfs_bdev.o 00:02:26.922 CXX test/cpp_headers/conf.o 00:02:26.922 CXX test/cpp_headers/cpuset.o 00:02:26.922 CXX test/cpp_headers/config.o 00:02:26.922 CXX test/cpp_headers/crc16.o 00:02:26.922 CXX test/cpp_headers/crc32.o 00:02:26.922 CXX test/cpp_headers/crc64.o 00:02:26.922 CXX test/cpp_headers/dif.o 00:02:26.922 CXX test/cpp_headers/dma.o 00:02:26.922 CXX test/cpp_headers/endian.o 00:02:26.922 CXX test/cpp_headers/env_dpdk.o 00:02:26.922 CXX test/cpp_headers/env.o 00:02:26.922 CXX test/cpp_headers/event.o 00:02:26.922 CXX test/cpp_headers/fd.o 00:02:26.922 CXX test/cpp_headers/fd_group.o 00:02:26.922 CXX test/cpp_headers/file.o 00:02:26.922 CXX test/cpp_headers/ftl.o 00:02:26.922 CXX test/cpp_headers/gpt_spec.o 00:02:26.922 CXX test/cpp_headers/histogram_data.o 00:02:26.922 CXX test/cpp_headers/hexlify.o 00:02:26.922 CXX test/cpp_headers/idxd_spec.o 00:02:26.922 CXX test/cpp_headers/idxd.o 00:02:26.922 CXX test/cpp_headers/init.o 00:02:26.922 CXX test/cpp_headers/ioat_spec.o 00:02:26.922 CXX test/cpp_headers/ioat.o 00:02:26.922 CXX test/cpp_headers/json.o 00:02:26.922 CXX test/cpp_headers/iscsi_spec.o 00:02:26.922 CXX test/cpp_headers/jsonrpc.o 00:02:26.922 CXX test/cpp_headers/keyring.o 00:02:26.922 CXX test/cpp_headers/log.o 00:02:26.922 CXX test/cpp_headers/keyring_module.o 00:02:26.922 CXX test/cpp_headers/likely.o 00:02:26.922 CXX test/cpp_headers/lvol.o 00:02:26.922 CXX test/cpp_headers/memory.o 00:02:26.922 CXX test/cpp_headers/nbd.o 00:02:26.922 CXX test/cpp_headers/mmio.o 00:02:26.922 CXX test/cpp_headers/notify.o 00:02:26.922 CXX test/cpp_headers/nvme.o 00:02:26.922 CXX test/cpp_headers/nvme_intel.o 00:02:26.922 CXX test/cpp_headers/nvme_ocssd.o 00:02:26.922 CC test/env/vtophys/vtophys.o 00:02:26.922 CXX test/cpp_headers/nvme_spec.o 00:02:26.922 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:26.922 CC test/env/pci/pci_ut.o 00:02:26.922 CXX test/cpp_headers/nvme_zns.o 00:02:26.922 CXX test/cpp_headers/nvmf_cmd.o 00:02:26.922 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:26.922 CXX test/cpp_headers/nvmf.o 00:02:26.922 CXX test/cpp_headers/opal.o 00:02:26.922 CC 
test/env/memory/memory_ut.o 00:02:26.922 CXX test/cpp_headers/nvmf_spec.o 00:02:26.922 CXX test/cpp_headers/nvmf_transport.o 00:02:26.922 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:26.922 CXX test/cpp_headers/opal_spec.o 00:02:26.922 CXX test/cpp_headers/pipe.o 00:02:26.922 CXX test/cpp_headers/pci_ids.o 00:02:26.922 CC examples/ioat/perf/perf.o 00:02:26.922 CC examples/ioat/verify/verify.o 00:02:26.922 CXX test/cpp_headers/queue.o 00:02:26.922 CXX test/cpp_headers/reduce.o 00:02:26.922 CC examples/vmd/lsvmd/lsvmd.o 00:02:26.922 CXX test/cpp_headers/rpc.o 00:02:26.922 CXX test/cpp_headers/scheduler.o 00:02:26.922 CC test/app/jsoncat/jsoncat.o 00:02:26.922 CC examples/sock/hello_world/hello_sock.o 00:02:26.922 CC examples/accel/perf/accel_perf.o 00:02:26.922 CC examples/nvme/arbitration/arbitration.o 00:02:26.922 CC test/nvme/e2edp/nvme_dp.o 00:02:26.922 CC test/event/event_perf/event_perf.o 00:02:26.922 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:26.922 CXX test/cpp_headers/scsi.o 00:02:26.922 CC examples/idxd/perf/perf.o 00:02:26.922 CC test/app/stub/stub.o 00:02:26.922 CC examples/vmd/led/led.o 00:02:26.922 CC examples/nvme/reconnect/reconnect.o 00:02:26.922 CC app/fio/nvme/fio_plugin.o 00:02:26.922 CC examples/nvme/hotplug/hotplug.o 00:02:26.922 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:27.190 CC test/event/reactor/reactor.o 00:02:27.190 CC test/app/histogram_perf/histogram_perf.o 00:02:27.190 CC test/nvme/startup/startup.o 00:02:27.190 CC test/nvme/simple_copy/simple_copy.o 00:02:27.190 CC test/nvme/reset/reset.o 00:02:27.190 CC test/nvme/reserve/reserve.o 00:02:27.190 CC test/nvme/overhead/overhead.o 00:02:27.190 CC test/bdev/bdevio/bdevio.o 00:02:27.190 CC examples/util/zipf/zipf.o 00:02:27.190 CC test/nvme/sgl/sgl.o 00:02:27.190 CC test/nvme/aer/aer.o 00:02:27.190 CC examples/nvme/abort/abort.o 00:02:27.190 CC test/nvme/connect_stress/connect_stress.o 00:02:27.191 CC test/thread/poller_perf/poller_perf.o 00:02:27.191 CC test/nvme/compliance/nvme_compliance.o 00:02:27.191 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.191 CC test/event/app_repeat/app_repeat.o 00:02:27.191 CC test/nvme/fused_ordering/fused_ordering.o 00:02:27.191 CC test/accel/dif/dif.o 00:02:27.191 CC test/event/reactor_perf/reactor_perf.o 00:02:27.191 CC examples/nvme/hello_world/hello_world.o 00:02:27.191 CC test/nvme/err_injection/err_injection.o 00:02:27.191 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:27.191 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.191 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:27.191 CC examples/blob/cli/blobcli.o 00:02:27.191 CC test/nvme/boot_partition/boot_partition.o 00:02:27.191 CC test/nvme/cuse/cuse.o 00:02:27.191 CC app/fio/bdev/fio_plugin.o 00:02:27.191 CC examples/blob/hello_world/hello_blob.o 00:02:27.191 CC test/nvme/fdp/fdp.o 00:02:27.191 CC test/app/bdev_svc/bdev_svc.o 00:02:27.191 CXX test/cpp_headers/scsi_spec.o 00:02:27.191 CC test/blobfs/mkfs/mkfs.o 00:02:27.191 CC test/dma/test_dma/test_dma.o 00:02:27.191 LINK spdk_lspci 00:02:27.191 CC examples/nvmf/nvmf/nvmf.o 00:02:27.191 CC examples/thread/thread/thread_ex.o 00:02:27.191 CC test/event/scheduler/scheduler.o 00:02:27.191 LINK spdk_nvme_discover 00:02:27.455 LINK rpc_client_test 00:02:27.455 CC test/env/mem_callbacks/mem_callbacks.o 00:02:27.455 LINK vhost 00:02:27.455 LINK interrupt_tgt 00:02:27.455 LINK nvmf_tgt 00:02:27.455 LINK spdk_trace_record 00:02:27.455 CC test/lvol/esnap/esnap.o 00:02:27.455 LINK iscsi_tgt 00:02:27.455 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:02:27.714 LINK zipf 00:02:27.714 LINK spdk_tgt 00:02:27.714 LINK lsvmd 00:02:27.714 LINK reactor 00:02:27.714 LINK event_perf 00:02:27.714 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:27.714 LINK histogram_perf 00:02:27.714 LINK stub 00:02:27.714 LINK env_dpdk_post_init 00:02:27.714 LINK jsoncat 00:02:27.714 LINK vtophys 00:02:27.714 LINK poller_perf 00:02:27.714 LINK led 00:02:27.714 LINK app_repeat 00:02:27.714 LINK ioat_perf 00:02:27.714 CXX test/cpp_headers/sock.o 00:02:27.714 LINK startup 00:02:27.714 LINK reactor_perf 00:02:27.714 LINK cmb_copy 00:02:27.714 LINK verify 00:02:27.714 LINK connect_stress 00:02:27.714 CXX test/cpp_headers/stdinc.o 00:02:27.714 LINK boot_partition 00:02:27.714 CXX test/cpp_headers/string.o 00:02:27.714 CXX test/cpp_headers/thread.o 00:02:27.714 CXX test/cpp_headers/trace.o 00:02:27.714 LINK reserve 00:02:27.714 CXX test/cpp_headers/trace_parser.o 00:02:27.714 CXX test/cpp_headers/tree.o 00:02:27.714 CXX test/cpp_headers/ublk.o 00:02:27.714 CXX test/cpp_headers/util.o 00:02:27.714 CXX test/cpp_headers/uuid.o 00:02:27.714 CXX test/cpp_headers/version.o 00:02:27.714 LINK bdev_svc 00:02:27.714 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.714 LINK hello_sock 00:02:27.714 LINK pmr_persistence 00:02:27.714 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.714 CXX test/cpp_headers/vhost.o 00:02:27.714 CXX test/cpp_headers/vmd.o 00:02:27.714 LINK fused_ordering 00:02:27.714 LINK spdk_dd 00:02:27.714 CXX test/cpp_headers/xor.o 00:02:27.714 CXX test/cpp_headers/zipf.o 00:02:27.714 LINK mkfs 00:02:27.714 LINK doorbell_aers 00:02:27.714 LINK err_injection 00:02:27.714 LINK nvme_dp 00:02:27.972 LINK hello_bdev 00:02:27.972 LINK simple_copy 00:02:27.972 LINK hello_world 00:02:27.972 LINK overhead 00:02:27.972 LINK hotplug 00:02:27.972 LINK reset 00:02:27.972 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:27.972 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:27.972 LINK scheduler 00:02:27.972 LINK sgl 00:02:27.972 LINK reconnect 00:02:27.972 LINK thread 00:02:27.972 LINK arbitration 00:02:27.972 LINK hello_blob 00:02:27.972 LINK aer 00:02:27.972 LINK nvme_compliance 00:02:27.972 LINK spdk_trace 00:02:27.972 LINK idxd_perf 00:02:27.972 LINK fdp 00:02:27.972 LINK nvmf 00:02:27.972 LINK pci_ut 00:02:27.972 LINK abort 00:02:27.972 LINK bdevio 00:02:27.972 LINK dif 00:02:27.972 LINK test_dma 00:02:27.972 LINK blobcli 00:02:27.972 LINK nvme_manage 00:02:27.972 LINK spdk_nvme 00:02:28.232 LINK accel_perf 00:02:28.232 LINK spdk_bdev 00:02:28.232 LINK nvme_fuzz 00:02:28.232 LINK bdevperf 00:02:28.232 LINK spdk_nvme_perf 00:02:28.232 LINK spdk_nvme_identify 00:02:28.232 LINK vhost_fuzz 00:02:28.232 LINK spdk_top 00:02:28.493 LINK mem_callbacks 00:02:28.493 LINK memory_ut 00:02:28.753 LINK cuse 00:02:29.325 LINK iscsi_fuzz 00:02:31.869 LINK esnap 00:02:31.869 00:02:31.869 real 0m49.937s 00:02:31.869 user 6m31.735s 00:02:31.869 sys 4m36.143s 00:02:31.869 14:39:14 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:31.869 14:39:14 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.869 ************************************ 00:02:31.869 END TEST make 00:02:31.869 ************************************ 00:02:32.130 14:39:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:32.130 14:39:14 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:32.130 14:39:14 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:32.130 14:39:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.130 14:39:14 -- pm/common@44 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:32.130 14:39:14 -- pm/common@45 -- $ pid=736223 00:02:32.130 14:39:14 -- pm/common@52 -- $ sudo kill -TERM 736223 00:02:32.130 14:39:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.130 14:39:14 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:32.130 14:39:14 -- pm/common@45 -- $ pid=736224 00:02:32.130 14:39:14 -- pm/common@52 -- $ sudo kill -TERM 736224 00:02:32.130 14:39:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.130 14:39:14 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:32.130 14:39:14 -- pm/common@45 -- $ pid=736228 00:02:32.130 14:39:14 -- pm/common@52 -- $ sudo kill -TERM 736228 00:02:32.130 14:39:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.130 14:39:14 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:32.130 14:39:14 -- pm/common@45 -- $ pid=736225 00:02:32.130 14:39:14 -- pm/common@52 -- $ sudo kill -TERM 736225 00:02:32.130 14:39:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:32.391 14:39:14 -- nvmf/common.sh@7 -- # uname -s 00:02:32.391 14:39:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:32.391 14:39:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:32.391 14:39:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:32.391 14:39:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:32.391 14:39:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:32.391 14:39:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:32.391 14:39:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:32.391 14:39:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:32.391 14:39:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:32.391 14:39:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:32.391 14:39:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:32.391 14:39:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:32.391 14:39:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:32.391 14:39:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:32.391 14:39:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:32.391 14:39:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:32.391 14:39:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:32.391 14:39:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:32.391 14:39:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.391 14:39:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.391 14:39:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.391 14:39:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.391 14:39:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.392 14:39:14 -- paths/export.sh@5 -- # export PATH 00:02:32.392 14:39:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.392 14:39:14 -- nvmf/common.sh@47 -- # : 0 00:02:32.392 14:39:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:32.392 14:39:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:32.392 14:39:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:32.392 14:39:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:32.392 14:39:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:32.392 14:39:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:32.392 14:39:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:32.392 14:39:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:32.392 14:39:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:32.392 14:39:14 -- spdk/autotest.sh@32 -- # uname -s 00:02:32.392 14:39:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:32.392 14:39:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:32.392 14:39:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.392 14:39:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:32.392 14:39:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.392 14:39:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:32.392 14:39:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:32.392 14:39:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:32.392 14:39:14 -- spdk/autotest.sh@48 -- # udevadm_pid=798962 00:02:32.392 14:39:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:32.392 14:39:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:32.392 14:39:14 -- pm/common@17 -- # local monitor 00:02:32.392 14:39:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.392 14:39:14 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=798965 00:02:32.392 14:39:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.392 14:39:14 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=798967 00:02:32.392 14:39:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.392 14:39:14 -- pm/common@21 -- # date +%s 00:02:32.392 14:39:14 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=798969 00:02:32.392 14:39:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.392 14:39:14 -- pm/common@21 -- # date +%s 00:02:32.392 14:39:14 -- pm/common@23 -- # 
MONITOR_RESOURCES_PIDS["$monitor"]=798972 00:02:32.392 14:39:14 -- pm/common@26 -- # sleep 1 00:02:32.392 14:39:14 -- pm/common@21 -- # date +%s 00:02:32.392 14:39:14 -- pm/common@21 -- # date +%s 00:02:32.392 14:39:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135154 00:02:32.392 14:39:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135154 00:02:32.392 14:39:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135154 00:02:32.392 14:39:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135154 00:02:32.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135154_collect-vmstat.pm.log 00:02:32.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135154_collect-bmc-pm.bmc.pm.log 00:02:32.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135154_collect-cpu-load.pm.log 00:02:32.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135154_collect-cpu-temp.pm.log 00:02:33.333 14:39:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:33.333 14:39:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:33.333 14:39:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:33.333 14:39:15 -- common/autotest_common.sh@10 -- # set +x 00:02:33.333 14:39:15 -- spdk/autotest.sh@59 -- # create_test_list 00:02:33.333 14:39:15 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:33.333 14:39:15 -- common/autotest_common.sh@10 -- # set +x 00:02:33.333 14:39:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:33.333 14:39:15 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.333 14:39:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.333 14:39:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:33.333 14:39:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.333 14:39:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:33.333 14:39:15 -- common/autotest_common.sh@1441 -- # uname 00:02:33.333 14:39:15 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:33.333 14:39:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:33.333 14:39:15 -- common/autotest_common.sh@1461 -- # uname 00:02:33.333 14:39:15 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:33.333 14:39:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:33.333 14:39:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:33.333 14:39:15 -- spdk/autotest.sh@72 -- # hash lcov 00:02:33.333 14:39:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == 
*\c\l\a\n\g* ]] 00:02:33.333 14:39:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:33.333 --rc lcov_branch_coverage=1 00:02:33.333 --rc lcov_function_coverage=1 00:02:33.333 --rc genhtml_branch_coverage=1 00:02:33.333 --rc genhtml_function_coverage=1 00:02:33.333 --rc genhtml_legend=1 00:02:33.333 --rc geninfo_all_blocks=1 00:02:33.333 ' 00:02:33.333 14:39:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:33.333 --rc lcov_branch_coverage=1 00:02:33.333 --rc lcov_function_coverage=1 00:02:33.333 --rc genhtml_branch_coverage=1 00:02:33.333 --rc genhtml_function_coverage=1 00:02:33.333 --rc genhtml_legend=1 00:02:33.333 --rc geninfo_all_blocks=1 00:02:33.333 ' 00:02:33.333 14:39:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:33.333 --rc lcov_branch_coverage=1 00:02:33.333 --rc lcov_function_coverage=1 00:02:33.333 --rc genhtml_branch_coverage=1 00:02:33.333 --rc genhtml_function_coverage=1 00:02:33.333 --rc genhtml_legend=1 00:02:33.333 --rc geninfo_all_blocks=1 00:02:33.333 --no-external' 00:02:33.333 14:39:15 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:33.333 --rc lcov_branch_coverage=1 00:02:33.333 --rc lcov_function_coverage=1 00:02:33.333 --rc genhtml_branch_coverage=1 00:02:33.333 --rc genhtml_function_coverage=1 00:02:33.333 --rc genhtml_legend=1 00:02:33.333 --rc geninfo_all_blocks=1 00:02:33.333 --no-external' 00:02:33.333 14:39:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:33.594 lcov: LCOV version 1.14 00:02:33.594 14:39:16 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:41.727 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:41.727 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:41.728 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:41.728 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:41.728 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:41.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:41.729 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:45.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:45.024 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:55.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:55.022 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:55.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:55.022 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:55.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:55.022 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:03.160 14:39:44 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:03.160 14:39:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:03.160 14:39:44 -- common/autotest_common.sh@10 -- # set +x 00:03:03.160 14:39:44 -- spdk/autotest.sh@91 -- # rm -f 00:03:03.160 14:39:44 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.074 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:05.074 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.074 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:05.335 14:39:47 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:05.335 14:39:47 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:05.335 14:39:47 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:05.335 14:39:47 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:05.335 14:39:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:05.335 14:39:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:05.335 14:39:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:05.335 14:39:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.335 14:39:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:05.335 14:39:47 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:05.335 14:39:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:05.335 14:39:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:05.335 14:39:47 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme0n1 00:03:05.335 14:39:47 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:05.335 14:39:47 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:05.335 No valid GPT data, bailing 00:03:05.335 14:39:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:05.335 14:39:47 -- scripts/common.sh@391 -- # pt= 00:03:05.335 14:39:47 -- scripts/common.sh@392 -- # return 1 00:03:05.335 14:39:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:05.335 1+0 records in 00:03:05.335 1+0 records out 00:03:05.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427379 s, 245 MB/s 00:03:05.335 14:39:47 -- spdk/autotest.sh@118 -- # sync 00:03:05.335 14:39:47 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:05.335 14:39:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:05.335 14:39:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:13.737 14:39:56 -- spdk/autotest.sh@124 -- # uname -s 00:03:13.737 14:39:56 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:13.737 14:39:56 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:13.737 14:39:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:13.737 14:39:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:13.737 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:03:13.737 ************************************ 00:03:13.737 START TEST setup.sh 00:03:13.737 ************************************ 00:03:13.737 14:39:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:13.737 * Looking for test storage... 00:03:13.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.737 14:39:56 -- setup/test-setup.sh@10 -- # uname -s 00:03:13.737 14:39:56 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:13.737 14:39:56 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:13.737 14:39:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:13.737 14:39:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:13.737 14:39:56 -- common/autotest_common.sh@10 -- # set +x 00:03:13.998 ************************************ 00:03:13.998 START TEST acl 00:03:13.998 ************************************ 00:03:13.998 14:39:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:13.998 * Looking for test storage... 
00:03:13.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.998 14:39:56 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:13.998 14:39:56 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:13.998 14:39:56 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:13.998 14:39:56 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:13.998 14:39:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:13.998 14:39:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:13.999 14:39:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:13.999 14:39:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.999 14:39:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:13.999 14:39:56 -- setup/acl.sh@12 -- # devs=() 00:03:13.999 14:39:56 -- setup/acl.sh@12 -- # declare -a devs 00:03:13.999 14:39:56 -- setup/acl.sh@13 -- # drivers=() 00:03:13.999 14:39:56 -- setup/acl.sh@13 -- # declare -A drivers 00:03:13.999 14:39:56 -- setup/acl.sh@51 -- # setup reset 00:03:13.999 14:39:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.999 14:39:56 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.206 14:40:00 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:18.206 14:40:00 -- setup/acl.sh@16 -- # local dev driver 00:03:18.206 14:40:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.206 14:40:00 -- setup/acl.sh@15 -- # setup output status 00:03:18.206 14:40:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.206 14:40:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:21.507 Hugepages 00:03:21.507 node hugesize free / total 00:03:21.507 14:40:03 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.507 14:40:03 -- setup/acl.sh@19 -- # continue 00:03:21.507 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.507 14:40:03 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.507 14:40:03 -- setup/acl.sh@19 -- # continue 00:03:21.507 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.507 14:40:03 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.507 14:40:03 -- setup/acl.sh@19 -- # continue 00:03:21.507 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.507 00:03:21.507 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.507 14:40:03 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.507 14:40:03 -- setup/acl.sh@19 -- # continue 00:03:21.507 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.507 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
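Note: the setup/acl.sh trace above collects controllers by parsing `setup.sh status` output with `read -r _ dev _ _ _ driver _`. A minimal standalone sketch of that loop (field positions taken from the reads visible in this trace; the `./setup.sh` path and variable names are illustrative, not the SPDK script itself):

devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue            # skip the Hugepages and header lines
    [[ $driver == nvme ]] || continue            # ioatdma entries are passed over with `continue`
    [[ -n ${PCI_BLOCKED-} && $PCI_BLOCKED == *"$dev"* ]] && continue
    devs+=("$dev")                               # e.g. 0000:65:00.0
    drivers["$dev"]=$driver
done < <(./setup.sh status)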
00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.508 14:40:03 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:21.508 14:40:03 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.508 14:40:03 -- setup/acl.sh@20 -- # continue 00:03:21.508 14:40:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.508 14:40:03 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:21.508 14:40:03 -- setup/acl.sh@54 -- # run_test denied denied 00:03:21.508 14:40:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.508 14:40:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.508 14:40:03 -- common/autotest_common.sh@10 -- # set +x 00:03:21.508 ************************************ 00:03:21.508 START TEST denied 00:03:21.508 ************************************ 00:03:21.508 14:40:03 -- common/autotest_common.sh@1111 -- # denied 00:03:21.508 14:40:03 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:21.508 14:40:03 -- setup/acl.sh@38 -- # setup output config 00:03:21.508 14:40:03 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:21.508 14:40:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.508 14:40:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:25.717 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:25.717 14:40:07 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:25.717 14:40:07 -- setup/acl.sh@28 -- # local dev driver 00:03:25.717 14:40:07 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:25.717 14:40:07 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:25.717 14:40:07 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:25.717 14:40:07 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:25.717 14:40:07 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:25.717 14:40:07 -- setup/acl.sh@41 -- # setup reset 00:03:25.717 14:40:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.717 14:40:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.012 00:03:31.012 real 0m8.852s 00:03:31.012 user 0m2.968s 00:03:31.012 sys 0m5.145s 00:03:31.012 14:40:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:31.012 14:40:12 -- common/autotest_common.sh@10 -- # set +x 00:03:31.012 ************************************ 00:03:31.012 END TEST denied 00:03:31.012 ************************************ 00:03:31.012 14:40:12 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.012 14:40:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:31.012 14:40:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:31.012 14:40:12 -- common/autotest_common.sh@10 -- # set +x 00:03:31.012 ************************************ 00:03:31.012 START TEST allowed 00:03:31.012 ************************************ 00:03:31.012 14:40:12 -- common/autotest_common.sh@1111 -- # allowed 00:03:31.012 14:40:12 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:31.012 14:40:12 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:31.012 14:40:12 -- setup/acl.sh@45 -- # setup output config 00:03:31.012 14:40:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.012 14:40:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
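Note: the denied/allowed tests drive `setup.sh config` with PCI_BLOCKED / PCI_ALLOWED and then confirm the result through sysfs (setup/acl.sh@32-33 above). A minimal sketch of that verification step; the helper name is illustrative, the sysfs path is the one shown in this trace:

verify_binding() {
    # resolve /sys/bus/pci/devices/<bdf>/driver and compare its basename
    # to the driver the controller is expected to be bound to
    local bdf=$1 expected=$2 driver
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    [[ $(basename "$driver") == "$expected" ]]
}

verify_binding 0000:65:00.0 nvme      # denied: the blocked controller stays on the nvme driver
verify_binding 0000:65:00.0 vfio-pci  # allowed: the controller is rebound, "nvme -> vfio-pci"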
00:03:36.305 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.305 14:40:18 -- setup/acl.sh@47 -- # verify 00:03:36.305 14:40:18 -- setup/acl.sh@28 -- # local dev driver 00:03:36.305 14:40:18 -- setup/acl.sh@48 -- # setup reset 00:03:36.305 14:40:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.305 14:40:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.608 00:03:39.608 real 0m9.336s 00:03:39.608 user 0m2.512s 00:03:39.608 sys 0m5.017s 00:03:39.608 14:40:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:39.608 14:40:22 -- common/autotest_common.sh@10 -- # set +x 00:03:39.608 ************************************ 00:03:39.608 END TEST allowed 00:03:39.608 ************************************ 00:03:39.608 00:03:39.608 real 0m25.818s 00:03:39.608 user 0m8.201s 00:03:39.608 sys 0m15.152s 00:03:39.608 14:40:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:39.608 14:40:22 -- common/autotest_common.sh@10 -- # set +x 00:03:39.608 ************************************ 00:03:39.608 END TEST acl 00:03:39.608 ************************************ 00:03:39.868 14:40:22 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:39.868 14:40:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:39.868 14:40:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:39.868 14:40:22 -- common/autotest_common.sh@10 -- # set +x 00:03:39.868 ************************************ 00:03:39.868 START TEST hugepages 00:03:39.868 ************************************ 00:03:39.868 14:40:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:39.868 * Looking for test storage... 
00:03:40.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.131 14:40:22 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:40.131 14:40:22 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:40.131 14:40:22 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:40.131 14:40:22 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:40.131 14:40:22 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:40.131 14:40:22 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:40.131 14:40:22 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:40.131 14:40:22 -- setup/common.sh@18 -- # local node= 00:03:40.131 14:40:22 -- setup/common.sh@19 -- # local var val 00:03:40.131 14:40:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.131 14:40:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.131 14:40:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.131 14:40:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.131 14:40:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.131 14:40:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 107192140 kB' 'MemAvailable: 110721528 kB' 'Buffers: 4124 kB' 'Cached: 10404012 kB' 'SwapCached: 0 kB' 'Active: 7494508 kB' 'Inactive: 3515716 kB' 'Active(anon): 6804112 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605476 kB' 'Mapped: 172604 kB' 'Shmem: 6202024 kB' 'KReclaimable: 295216 kB' 'Slab: 1066816 kB' 'SReclaimable: 295216 kB' 'SUnreclaim: 771600 kB' 'KernelStack: 26976 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460884 kB' 'Committed_AS: 8191584 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234620 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.131 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.131 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 
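Note: the long [[ ... == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue runs here are setup/common.sh get_meminfo scanning /proc/meminfo field names until it reaches Hugepagesize. A compact standalone sketch of the same lookup (the function name is illustrative, not the SPDK helper):

get_meminfo_field() {
    # read "Key: value kB" lines, skip non-matching keys, print the wanted value
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_field Hugepagesize   # prints 2048 on this node (kB), matching the dump above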
00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 
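For comparison only (the harness keeps the pure-bash loop traced here), the same lookup and the kernel knobs the hugepages helpers name a few entries below can also be reached directly; the paths are the standard Linux hugepage interfaces, and the node0 path is just an example:

awk '/^Hugepagesize:/ {print $2}' /proc/meminfo                      # -> 2048
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages           # per-size pool (default_huge_nr)
cat /proc/sys/vm/nr_hugepages                                        # global counter (global_huge_nr)
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # per-NUMA-node count, zeroed by clear_hp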
00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # continue 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.132 14:40:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.132 14:40:22 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.132 14:40:22 -- setup/common.sh@33 -- # echo 2048 00:03:40.132 14:40:22 -- setup/common.sh@33 -- # return 0 00:03:40.132 14:40:22 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:40.132 14:40:22 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:40.132 14:40:22 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:40.132 14:40:22 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:40.132 14:40:22 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:40.132 14:40:22 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:40.132 14:40:22 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:40.132 14:40:22 -- setup/hugepages.sh@207 -- # get_nodes 00:03:40.132 14:40:22 -- setup/hugepages.sh@27 -- # local node 00:03:40.132 14:40:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.132 14:40:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:40.132 14:40:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.132 14:40:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:40.132 14:40:22 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.132 14:40:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.132 14:40:22 -- setup/hugepages.sh@208 -- # clear_hp 00:03:40.132 14:40:22 -- setup/hugepages.sh@37 -- # local node hp 00:03:40.132 14:40:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.132 14:40:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.132 14:40:22 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.132 14:40:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.132 14:40:22 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.132 14:40:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.132 14:40:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.132 14:40:22 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.132 14:40:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.132 14:40:22 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.132 14:40:22 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:40.132 14:40:22 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:40.132 14:40:22 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:40.132 14:40:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.132 14:40:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.132 14:40:22 -- common/autotest_common.sh@10 -- # set +x 00:03:40.132 ************************************ 00:03:40.132 START TEST default_setup 00:03:40.133 ************************************ 00:03:40.133 14:40:22 -- common/autotest_common.sh@1111 -- # default_setup 00:03:40.133 14:40:22 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:40.133 14:40:22 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.133 14:40:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.133 14:40:22 -- setup/hugepages.sh@51 -- # shift 00:03:40.133 14:40:22 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.133 14:40:22 -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.133 14:40:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.133 14:40:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.133 14:40:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.133 14:40:22 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.133 14:40:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.133 14:40:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.133 14:40:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.133 14:40:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.133 14:40:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.133 14:40:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
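After clear_hp zeroes every node's existing pool, get_test_nr_hugepages 2097152 0 in the entries above resolves to 1024 pages pinned to node 0; the arithmetic is simply the requested size divided by the default hugepage size, both in kB. A small worked sketch with the numbers from this run (variable names are illustrative):

size_kb=2097152                                  # requested pool: 2 GiB
hugepage_kb=2048                                 # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))        # 2097152 / 2048 = 1024
nodes_test=()                                    # per-node allocation table
nodes_test[0]=$nr_hugepages                      # only user node 0 was passed in
echo "node0 <- ${nodes_test[0]} pages of ${hugepage_kb} kB"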
00:03:40.133 14:40:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.133 14:40:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.133 14:40:22 -- setup/hugepages.sh@73 -- # return 0 00:03:40.133 14:40:22 -- setup/hugepages.sh@137 -- # setup output 00:03:40.133 14:40:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.133 14:40:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.437 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:43.697 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:43.958 14:40:26 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:43.958 14:40:26 -- setup/hugepages.sh@89 -- # local node 00:03:43.958 14:40:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.958 14:40:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.958 14:40:26 -- setup/hugepages.sh@92 -- # local surp 00:03:43.958 14:40:26 -- setup/hugepages.sh@93 -- # local resv 00:03:43.958 14:40:26 -- setup/hugepages.sh@94 -- # local anon 00:03:43.958 14:40:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.958 14:40:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.958 14:40:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.958 14:40:26 -- setup/common.sh@18 -- # local node= 00:03:43.958 14:40:26 -- setup/common.sh@19 -- # local var val 00:03:43.958 14:40:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.958 14:40:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.958 14:40:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.958 14:40:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.958 14:40:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.958 14:40:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.958 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.958 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.959 14:40:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109336220 kB' 'MemAvailable: 112865272 kB' 'Buffers: 4124 kB' 'Cached: 10404132 kB' 'SwapCached: 0 kB' 'Active: 7513196 kB' 'Inactive: 3515716 kB' 'Active(anon): 6822800 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624008 kB' 'Mapped: 172988 kB' 'Shmem: 6202144 kB' 'KReclaimable: 294544 kB' 'Slab: 1063908 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769364 kB' 'KernelStack: 26992 
kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8225324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234572 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # continue 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # continue 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # continue 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # continue 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # continue 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # continue 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.959 14:40:26 -- setup/common.sh@32 -- # continue 00:03:43.959 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 
14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.223 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.223 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.224 14:40:26 -- setup/common.sh@33 -- # echo 0 00:03:44.224 14:40:26 -- setup/common.sh@33 -- # return 0 00:03:44.224 14:40:26 -- setup/hugepages.sh@97 -- # anon=0 00:03:44.224 14:40:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.224 14:40:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.224 14:40:26 -- setup/common.sh@18 -- # local node= 00:03:44.224 14:40:26 -- setup/common.sh@19 -- # local var val 00:03:44.224 14:40:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.224 14:40:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.224 14:40:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.224 14:40:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.224 14:40:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.224 14:40:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109336008 kB' 'MemAvailable: 112865060 kB' 'Buffers: 4124 kB' 'Cached: 10404136 kB' 'SwapCached: 0 kB' 'Active: 7511728 kB' 'Inactive: 3515716 kB' 'Active(anon): 6821332 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622652 kB' 'Mapped: 172972 kB' 'Shmem: 6202148 kB' 'KReclaimable: 294544 kB' 'Slab: 1063848 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769304 kB' 'KernelStack: 26976 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8208220 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234492 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.224 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.224 14:40:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 
-- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 
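The AnonHugePages / HugePages_Surp / HugePages_Rsvd lookups in this stretch feed verify_nr_hugepages, which further down in the trace (hugepages.sh@107) checks that the kernel's pool matches what default_setup requested: HugePages_Total must equal nr_hugepages plus surplus plus reserved pages. A hedged sketch of that consistency check, assuming the same /proc/meminfo fields (not the SPDK function itself):

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1024 in this run
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0
nr_hugepages=1024                                              # requested by default_setup
if (( total == nr_hugepages + surp + rsvd )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "hugepage pool mismatch: total=$total expected=$(( nr_hugepages + surp + rsvd ))" >&2
fi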
00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.225 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.225 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.226 14:40:26 -- setup/common.sh@33 -- # echo 0 00:03:44.226 14:40:26 -- setup/common.sh@33 -- # return 0 00:03:44.226 14:40:26 -- setup/hugepages.sh@99 -- # surp=0 00:03:44.226 14:40:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.226 14:40:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.226 14:40:26 -- setup/common.sh@18 -- # local node= 00:03:44.226 14:40:26 -- setup/common.sh@19 -- # local var val 00:03:44.226 14:40:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.226 14:40:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.226 14:40:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.226 14:40:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.226 14:40:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.226 14:40:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109336876 kB' 'MemAvailable: 112865928 kB' 'Buffers: 4124 kB' 'Cached: 10404148 kB' 'SwapCached: 0 kB' 'Active: 7511464 kB' 'Inactive: 3515716 kB' 'Active(anon): 6821068 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622412 kB' 'Mapped: 172972 kB' 'Shmem: 6202160 kB' 'KReclaimable: 294544 kB' 'Slab: 1063956 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769412 kB' 'KernelStack: 26960 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8208240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234492 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 
14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.226 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.226 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # 
continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.227 14:40:26 -- setup/common.sh@33 -- # echo 0 00:03:44.227 14:40:26 -- setup/common.sh@33 -- # return 0 00:03:44.227 14:40:26 -- setup/hugepages.sh@100 -- # resv=0 00:03:44.227 14:40:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.227 nr_hugepages=1024 00:03:44.227 14:40:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.227 resv_hugepages=0 00:03:44.227 14:40:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.227 surplus_hugepages=0 00:03:44.227 14:40:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.227 anon_hugepages=0 00:03:44.227 14:40:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.227 14:40:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.227 14:40:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.227 14:40:26 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:44.227 14:40:26 -- setup/common.sh@18 -- # local node= 00:03:44.227 14:40:26 -- setup/common.sh@19 -- # local var val 00:03:44.227 14:40:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.227 14:40:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.227 14:40:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.227 14:40:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.227 14:40:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.227 14:40:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109338268 kB' 'MemAvailable: 112867320 kB' 'Buffers: 4124 kB' 'Cached: 10404148 kB' 'SwapCached: 0 kB' 'Active: 7511660 kB' 'Inactive: 3515716 kB' 'Active(anon): 6821264 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622592 kB' 'Mapped: 172972 kB' 'Shmem: 6202160 kB' 'KReclaimable: 294544 kB' 'Slab: 1063956 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769412 kB' 'KernelStack: 26960 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8209520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234476 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.227 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.227 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # 
continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.228 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.228 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 
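The long run of "continue" entries above is setup/common.sh's get_meminfo scanning every key in /proc/meminfo until it reaches the one requested (HugePages_Total here), using the IFS=': ' / read -r var val _ split visible in the trace. A minimal stand-alone sketch of that lookup pattern, assuming the standard "Key: value kB" layout of /proc/meminfo (the function name below is illustrative, not the actual SPDK helper):

    # Print the value of one /proc/meminfo field, e.g. HugePages_Total.
    # Mirrors the IFS=': ' / read -r var val _ scan seen in the trace.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Total   # e.g. prints 1024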
00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.229 14:40:26 -- setup/common.sh@33 -- # echo 1024 00:03:44.229 14:40:26 -- setup/common.sh@33 -- # return 0 00:03:44.229 14:40:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.229 14:40:26 -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.229 14:40:26 -- setup/hugepages.sh@27 -- # local node 00:03:44.229 14:40:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.229 14:40:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.229 14:40:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.229 14:40:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.229 14:40:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.229 14:40:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.229 14:40:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.229 14:40:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.229 14:40:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.229 14:40:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.229 14:40:26 -- setup/common.sh@18 -- # local node=0 00:03:44.229 14:40:26 -- setup/common.sh@19 -- # local var val 00:03:44.229 14:40:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.229 14:40:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.229 14:40:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.229 14:40:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.229 14:40:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.229 14:40:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58816980 kB' 'MemUsed: 6842028 kB' 'SwapCached: 0 
kB' 'Active: 2526532 kB' 'Inactive: 106348 kB' 'Active(anon): 2217012 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479240 kB' 'Mapped: 97592 kB' 'AnonPages: 156788 kB' 'Shmem: 2063372 kB' 'KernelStack: 12504 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159296 kB' 'Slab: 530652 kB' 'SReclaimable: 159296 kB' 'SUnreclaim: 371356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.229 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.229 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 
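Earlier in this trace get_nodes found two NUMA nodes (no_nodes=2), and the same meminfo scan is now repeated per node: with node=0 the trace shows mem_f switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the script strips before matching keys. A rough stand-alone equivalent, assuming that sysfs layout (paths and field names as seen in the trace, function name illustrative):

    # Look up one field in a single NUMA node's meminfo.
    # Lines there look like: "Node 0 HugePages_Surp:     0"
    get_node_meminfo_value() {
        local node=$1 key=$2 var val _
        # the first two fields are the "Node N" prefix, then key/value
        while IFS=': ' read -r _ _ var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    get_node_meminfo_value 0 HugePages_Surp   # e.g. prints 0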
00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # continue 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.230 14:40:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.230 14:40:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.230 14:40:26 -- setup/common.sh@33 -- # echo 0 00:03:44.230 14:40:26 -- setup/common.sh@33 -- # return 0 00:03:44.230 14:40:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.230 14:40:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.230 14:40:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.230 14:40:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.230 14:40:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.230 node0=1024 expecting 1024 00:03:44.230 14:40:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.230 00:03:44.230 real 0m4.008s 00:03:44.230 user 0m1.612s 00:03:44.230 sys 0m2.416s 00:03:44.230 14:40:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:44.230 14:40:26 -- common/autotest_common.sh@10 -- # set +x 00:03:44.230 ************************************ 00:03:44.230 END TEST default_setup 00:03:44.230 ************************************ 00:03:44.230 14:40:26 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:44.230 14:40:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:44.230 14:40:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.230 14:40:26 -- common/autotest_common.sh@10 -- # set +x 00:03:44.491 ************************************ 00:03:44.491 START TEST per_node_1G_alloc 00:03:44.491 ************************************ 00:03:44.492 14:40:26 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:44.492 14:40:26 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:44.492 14:40:26 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:44.492 14:40:26 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.492 14:40:26 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:44.492 14:40:26 -- setup/hugepages.sh@51 -- # shift 00:03:44.492 14:40:26 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:44.492 14:40:26 -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.492 14:40:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.492 14:40:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.492 14:40:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:44.492 14:40:26 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:44.492 14:40:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.492 14:40:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.492 14:40:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.492 14:40:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.492 14:40:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.492 14:40:26 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:44.492 14:40:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.492 14:40:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.492 14:40:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.492 14:40:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.492 14:40:26 -- setup/hugepages.sh@73 -- # return 0 00:03:44.492 14:40:26 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:44.492 
14:40:26 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:44.492 14:40:26 -- setup/hugepages.sh@146 -- # setup output 00:03:44.492 14:40:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.492 14:40:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.795 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:47.795 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.795 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.058 14:40:30 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:48.058 14:40:30 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:48.058 14:40:30 -- setup/hugepages.sh@89 -- # local node 00:03:48.058 14:40:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.058 14:40:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.058 14:40:30 -- setup/hugepages.sh@92 -- # local surp 00:03:48.058 14:40:30 -- setup/hugepages.sh@93 -- # local resv 00:03:48.058 14:40:30 -- setup/hugepages.sh@94 -- # local anon 00:03:48.058 14:40:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.058 14:40:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.058 14:40:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.058 14:40:30 -- setup/common.sh@18 -- # local node= 00:03:48.058 14:40:30 -- setup/common.sh@19 -- # local var val 00:03:48.058 14:40:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.058 14:40:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.058 14:40:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.058 14:40:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.058 14:40:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.058 14:40:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.058 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.058 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.058 14:40:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109367452 kB' 'MemAvailable: 112896504 kB' 'Buffers: 4124 kB' 'Cached: 10404276 kB' 'SwapCached: 0 kB' 'Active: 7512780 kB' 'Inactive: 3515716 kB' 'Active(anon): 6822384 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622848 kB' 'Mapped: 171972 
kB' 'Shmem: 6202288 kB' 'KReclaimable: 294544 kB' 'Slab: 1064348 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769804 kB' 'KernelStack: 27008 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8201420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:48.058 14:40:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.058 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.058 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.058 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.058 14:40:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.058 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.058 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.058 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.058 14:40:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.058 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.058 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.058 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
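For the per_node_1G_alloc run started above, the sizing visible in the trace is plain arithmetic: 1048576 kB (1 GiB) is requested on each of nodes 0 and 1, and with the 2048 kB default hugepage size that is 1048576 / 2048 = 512 pages per node (hence NRHUGE=512 and HUGENODE=0,1, 1024 pages in total). A small sketch of that conversion, with the helper name assumed rather than taken from the scripts:

    # Convert a per-node allocation in kB into default-sized hugepages.
    pages_per_node() {
        local size_kb=$1 hugepage_kb
        hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
        echo $(( size_kb / hugepage_kb ))
    }

    pages_per_node 1048576   # 1048576 kB / 2048 kB -> 512 hugepages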
00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.059 14:40:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.059 14:40:30 -- setup/common.sh@33 -- # echo 0 00:03:48.059 14:40:30 -- setup/common.sh@33 -- # return 0 00:03:48.059 14:40:30 -- setup/hugepages.sh@97 -- # anon=0 00:03:48.059 14:40:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.059 14:40:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.059 14:40:30 -- setup/common.sh@18 -- # local node= 00:03:48.059 14:40:30 -- setup/common.sh@19 -- # local var val 00:03:48.059 14:40:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.059 14:40:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.059 14:40:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.059 14:40:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.059 14:40:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.059 14:40:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.059 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109371356 kB' 'MemAvailable: 112900408 kB' 'Buffers: 4124 kB' 'Cached: 10404284 kB' 'SwapCached: 0 kB' 'Active: 7512580 kB' 'Inactive: 3515716 kB' 'Active(anon): 6822184 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622728 kB' 'Mapped: 171940 kB' 'Shmem: 6202296 kB' 'KReclaimable: 294544 kB' 'Slab: 1064356 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769812 kB' 'KernelStack: 27040 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8201432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 
14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.060 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.060 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.061 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.061 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.061 14:40:30 -- setup/common.sh@33 -- # echo 0 00:03:48.061 14:40:30 -- setup/common.sh@33 -- # return 0 00:03:48.061 14:40:30 -- setup/hugepages.sh@99 -- # surp=0 00:03:48.061 14:40:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.061 14:40:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.061 14:40:30 -- setup/common.sh@18 -- # local node= 00:03:48.061 14:40:30 -- setup/common.sh@19 -- # local var val 00:03:48.061 14:40:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.061 14:40:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.061 14:40:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.061 14:40:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.061 14:40:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.061 14:40:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.324 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.324 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.324 14:40:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109371428 kB' 'MemAvailable: 112900480 kB' 'Buffers: 4124 kB' 'Cached: 10404292 kB' 'SwapCached: 0 kB' 'Active: 7511696 kB' 'Inactive: 3515716 kB' 'Active(anon): 6821300 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622328 kB' 'Mapped: 171860 kB' 'Shmem: 6202304 kB' 'KReclaimable: 294544 kB' 'Slab: 1064348 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769804 kB' 'KernelStack: 27152 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8201444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234716 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:48.324 14:40:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.324 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.324 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.324 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.324 14:40:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.324 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.324 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.324 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.324 14:40:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 
00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.325 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.325 14:40:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.326 14:40:30 -- setup/common.sh@33 -- # echo 0 00:03:48.326 14:40:30 -- setup/common.sh@33 -- # return 0 00:03:48.326 14:40:30 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.326 14:40:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.326 nr_hugepages=1024 00:03:48.326 14:40:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.326 resv_hugepages=0 00:03:48.326 14:40:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.326 surplus_hugepages=0 00:03:48.326 14:40:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.326 anon_hugepages=0 00:03:48.326 14:40:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.326 14:40:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
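(The trace above is setup/common.sh's get_meminfo helper doing a linear scan of the meminfo file: each field is read with IFS=': ', skipped with continue until the requested key is reached, then its value is echoed and the function returns; hugepages.sh then checks that the requested page count equals allocated plus surplus plus reserved pages. The standalone sketch below reproduces that lookup pattern under simplifying assumptions: the name get_meminfo_sketch and the requested variable are illustrative, and only the global /proc/meminfo is read, whereas the real helper can also target a per-node meminfo file.)

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern seen in the trace above. Assumption:
# global /proc/meminfo only; the real helper also supports per-node files.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other field
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

requested=1024   # what the test configured via nr_hugepages
nr=$(get_meminfo_sketch HugePages_Total)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
# Same consistency check as hugepages.sh@107: the request must be fully
# satisfied with no surplus or reserved pages outstanding.
(( requested == nr + surp + resv )) && echo "nr_hugepages=$nr surplus=$surp reserved=$resv"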
00:03:48.326 14:40:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.326 14:40:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.326 14:40:30 -- setup/common.sh@18 -- # local node= 00:03:48.326 14:40:30 -- setup/common.sh@19 -- # local var val 00:03:48.326 14:40:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.326 14:40:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.326 14:40:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.326 14:40:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.326 14:40:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.326 14:40:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109370988 kB' 'MemAvailable: 112900040 kB' 'Buffers: 4124 kB' 'Cached: 10404308 kB' 'SwapCached: 0 kB' 'Active: 7510940 kB' 'Inactive: 3515716 kB' 'Active(anon): 6820544 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621516 kB' 'Mapped: 171860 kB' 'Shmem: 6202320 kB' 'KReclaimable: 294544 kB' 'Slab: 1064348 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769804 kB' 'KernelStack: 27008 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8199824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234732 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 
-- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.326 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.326 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 
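(As a side note on the meminfo dump echoed above: HugePages_Total is 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, i.e. the pool occupies exactly 1024 x 2048 kB = 2 GiB, since only 2 MiB pages are configured on this node. A small optional cross-check, not part of the test scripts:)

#!/usr/bin/env bash
# Optional cross-check of the hugepage lines printed above; assumes a single
# hugepage size is in use, as on this test node (Hugepagesize: 2048 kB).
declare -A m
while IFS=': ' read -r key val _; do m[$key]=$val; done < /proc/meminfo
total_kb=$(( ${m[HugePages_Total]} * ${m[Hugepagesize]} ))
echo "${m[HugePages_Total]} pages x ${m[Hugepagesize]} kB = ${total_kb} kB (Hugetlb: ${m[Hugetlb]} kB)"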
00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- 
setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.327 14:40:30 -- setup/common.sh@33 -- # echo 1024 00:03:48.327 14:40:30 -- setup/common.sh@33 -- # return 0 00:03:48.327 14:40:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.327 14:40:30 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.327 14:40:30 -- setup/hugepages.sh@27 -- # local node 00:03:48.327 14:40:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.327 14:40:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.327 14:40:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.327 14:40:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.327 14:40:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.327 14:40:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.327 14:40:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.327 14:40:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.327 14:40:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.327 14:40:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.327 14:40:30 -- setup/common.sh@18 -- # local node=0 00:03:48.327 14:40:30 -- setup/common.sh@19 -- # local var val 00:03:48.327 14:40:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.327 14:40:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.327 14:40:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.327 14:40:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.327 14:40:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.327 14:40:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.327 14:40:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59865316 kB' 'MemUsed: 5793692 kB' 'SwapCached: 0 kB' 'Active: 2528112 kB' 'Inactive: 106348 kB' 'Active(anon): 2218592 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479284 kB' 'Mapped: 96476 kB' 'AnonPages: 158820 kB' 'Shmem: 2063416 kB' 'KernelStack: 12632 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159296 kB' 'Slab: 530772 kB' 'SReclaimable: 159296 kB' 'SUnreclaim: 371476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.327 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.327 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # 
continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 
14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@33 -- # echo 0 00:03:48.328 14:40:30 -- setup/common.sh@33 -- # return 0 00:03:48.328 14:40:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.328 14:40:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.328 14:40:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.328 14:40:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.328 14:40:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.328 14:40:30 -- setup/common.sh@18 -- # local node=1 00:03:48.328 14:40:30 -- setup/common.sh@19 -- # local var val 00:03:48.328 14:40:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.328 14:40:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.328 14:40:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.328 14:40:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.328 14:40:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.328 14:40:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 49505368 kB' 'MemUsed: 11174492 kB' 'SwapCached: 0 kB' 'Active: 4982812 kB' 'Inactive: 3409368 kB' 'Active(anon): 4601936 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3409368 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7929172 kB' 'Mapped: 75384 kB' 'AnonPages: 463184 kB' 'Shmem: 4138928 kB' 'KernelStack: 14344 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135248 kB' 'Slab: 533580 kB' 'SReclaimable: 135248 kB' 'SUnreclaim: 398332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.328 14:40:30 -- setup/common.sh@32 -- # continue 
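(Here get_meminfo is being called with an explicit node argument, HugePages_Surp 0 and then 1: when /sys/devices/system/node/nodeN/meminfo exists, mem_f is pointed at it, the file is slurped with mapfile, and the leading "Node N " prefix is stripped with the extglob expansion "${mem[@]#Node +([0-9]) }" so the same key scan works unchanged. A rough, self-contained illustration follows; the function name and loop shape are assumptions, only the file selection and prefix strip mirror setup/common.sh directly.)

#!/usr/bin/env bash
# Rough illustration of the per-node lookup traced above.
shopt -s extglob   # required for the +([0-9]) pattern below

get_node_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 HugePages_Total:   512"; drop the
    # prefix so the keys line up with the /proc/meminfo format.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo HugePages_Surp 0   # prints 0 on the node traced above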
00:03:48.328 14:40:30 -- setup/common.sh@31 -- # IFS=': '
00:03:48.328 14:40:30 -- setup/common.sh@31 -- # read -r var val _
[xtrace: per-field scan of the node meminfo continues from Active through HugePages_Free; none of those fields match HugePages_Surp]
00:03:48.329 14:40:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.329 14:40:30 -- setup/common.sh@33 -- # echo 0
00:03:48.329 14:40:30 -- setup/common.sh@33 -- # return 0
00:03:48.329 14:40:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:48.329 14:40:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.329 14:40:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.329 14:40:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.329 14:40:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:48.329 node0=512 expecting 512
00:03:48.330 14:40:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.330 14:40:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.330 14:40:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.330 14:40:30 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:48.330 node1=512 expecting 512
00:03:48.330 14:40:30 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:48.330
00:03:48.330 real 0m3.901s
00:03:48.330 user 0m1.522s
00:03:48.330 sys 0m2.425s
00:03:48.330 14:40:30 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:48.330 14:40:30 -- common/autotest_common.sh@10 -- # set +x
00:03:48.330 ************************************
00:03:48.330 END TEST per_node_1G_alloc
00:03:48.330 ************************************
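The tail of per_node_1G_alloc above adds each node's HugePages_Surp (0 here) into nodes_test, echoes the per-node totals against the expected 512, and passes the final [[ 512 == 512 ]] comparison. A rough standalone re-check of that per-node tally is sketched below; the loop, the expected variable and the sysfs paths are illustrative choices, not code from setup/hugepages.sh.

    #!/usr/bin/env bash
    # Illustrative per-node hugepage tally, independent of the SPDK test scripts.
    expected=512
    status=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # nr_hugepages for the default 2048 kB hugepage size on this node
        got=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${node}=${got} expecting ${expected}"
        (( got == expected )) || status=1
    done
    exit "$status"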
00:03:48.330 14:40:30 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:48.330 14:40:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:48.330 14:40:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:48.330 14:40:30 -- common/autotest_common.sh@10 -- # set +x
00:03:48.596 ************************************
00:03:48.596 START TEST even_2G_alloc
00:03:48.596 ************************************
00:03:48.596 14:40:31 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:03:48.596 14:40:31 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:48.596 14:40:31 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:48.596 14:40:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:48.596 14:40:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:48.596 14:40:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:48.596 14:40:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:48.596 14:40:31 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:48.596 14:40:31 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:48.596 14:40:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:48.596 14:40:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:48.596 14:40:31 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:48.596 14:40:31 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:48.596 14:40:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:48.596 14:40:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:48.596 14:40:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.596 14:40:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:48.596 14:40:31 -- setup/hugepages.sh@83 -- # : 512
00:03:48.596 14:40:31 -- setup/hugepages.sh@84 -- # : 1
00:03:48.596 14:40:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.596 14:40:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:48.596 14:40:31 -- setup/hugepages.sh@83 -- # : 0
00:03:48.596 14:40:31 -- setup/hugepages.sh@84 -- # : 0
00:03:48.596 14:40:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.596 14:40:31 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:48.596 14:40:31 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:48.596 14:40:31 -- setup/hugepages.sh@153 -- # setup output
00:03:48.596 14:40:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.596 14:40:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
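get_test_nr_hugepages converts the requested 2097152 kB (2 GiB) into a hugepage count, and get_test_nr_hugepages_per_node then spreads that count evenly across the two NUMA nodes (512 + 512) before setup.sh is run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. The arithmetic, restated as a standalone sketch with illustrative variable names:

    # Illustrative restatement of the sizing seen in the trace above.
    size_kb=2097152                                                    # 2 GiB requested by the test
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this host
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                      # 2097152 / 2048 = 1024
    nodes=2
    per_node=$(( nr_hugepages / nodes ))                               # 512 pages per node
    echo "nr_hugepages=${nr_hugepages} per_node=${per_node}"
    # The test then hands the total to the setup script via the environment:
    #   NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes ./scripts/setup.sh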
00:03:51.895 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:51.895 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:51.895 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:52.159 14:40:34 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:52.159 14:40:34 -- setup/hugepages.sh@89 -- # local node
00:03:52.159 14:40:34 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:52.159 14:40:34 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:52.159 14:40:34 -- setup/hugepages.sh@92 -- # local surp
00:03:52.159 14:40:34 -- setup/hugepages.sh@93 -- # local resv
00:03:52.159 14:40:34 -- setup/hugepages.sh@94 -- # local anon
00:03:52.159 14:40:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:52.159 14:40:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:52.159 14:40:34 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:52.159 14:40:34 -- setup/common.sh@18 -- # local node=
00:03:52.159 14:40:34 -- setup/common.sh@19 -- # local var val
00:03:52.159 14:40:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.159 14:40:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.159 14:40:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.159 14:40:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.159 14:40:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.159 14:40:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.159 14:40:34 -- setup/common.sh@31 -- # IFS=': '
00:03:52.159 14:40:34 -- setup/common.sh@31 -- # read -r var val _
00:03:52.159 14:40:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109405728 kB' 'MemAvailable: 112934780 kB' 'Buffers: 4124 kB' 'Cached: 10404424 kB' 'SwapCached: 0 kB' 'Active: 7511904 kB' 'Inactive: 3515716 kB' 'Active(anon): 6821508 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621976 kB' 'Mapped: 172016 kB' 'Shmem: 6202436 kB' 'KReclaimable: 294544 kB' 'Slab: 1063968 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769424 kB' 'KernelStack: 26992 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8199432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234780 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB'
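The printf above is the xtrace of get_meminfo dumping the /proc/meminfo lines it has just slurped with mapfile; the -e test against /sys/devices/system/node/node/meminfo (the node number is empty here) and the "Node +([0-9]) " prefix strip suggest the same helper reads a per-node meminfo file when a node is supplied. A compact sketch of that parse pattern, with a hypothetical helper name, is below.

    # Illustrative meminfo lookup in the style the xtrace shows; meminfo_value
    # is a made-up name, not the helper in setup/common.sh.
    shopt -s extglob
    meminfo_value() {                    # meminfo_value <Field> [<node>]
        local get=$1 node=${2:-} mem_f mem line var val _
        mem_f=/proc/meminfo
        # Per-node statistics live in sysfs and carry a "Node N " prefix on every line.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Quoting $get forces a literal comparison, the same effect as the
            # backslash-escaped pattern seen in the trace ([[ $var == \A\n\o\n... ]]).
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    meminfo_value AnonHugePages          # prints 0 on this host
    meminfo_value HugePages_Free 0       # per-node variant, if node0 exists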
[xtrace: field-by-field scan of /proc/meminfo from MemTotal through HardwareCorrupted; none of those fields match AnonHugePages]
00:03:52.160 14:40:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.160 14:40:34 -- setup/common.sh@33 -- # echo 0
00:03:52.160 14:40:34 -- setup/common.sh@33 -- # return 0
00:03:52.160 14:40:34 -- setup/hugepages.sh@97 -- # anon=0
00:03:52.160 14:40:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:52.160 14:40:34 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.160 14:40:34 -- setup/common.sh@18 -- # local node=
00:03:52.160 14:40:34 -- setup/common.sh@19 -- # local var val
00:03:52.160 14:40:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.160 14:40:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.160 14:40:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.160 14:40:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.160 14:40:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.160 14:40:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.160 14:40:34 -- setup/common.sh@31 -- # IFS=': '
00:03:52.160 14:40:34 -- setup/common.sh@31 -- # read -r var val _
00:03:52.160 14:40:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109407456 kB' 'MemAvailable: 112936508 kB' 'Buffers: 4124 kB' 'Cached: 10404424 kB' 'SwapCached: 0 kB' 'Active: 7511564 kB' 'Inactive: 3515716 kB' 'Active(anon): 6821168 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622116 kB' 'Mapped: 171896 kB' 'Shmem: 6202436 kB' 'KReclaimable: 294544 kB' 'Slab: 1063924 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769380 kB' 'KernelStack: 26976 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8199444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB'
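anon ends up 0, and the lookup is only made because the transparent-hugepage setting checked a few lines earlier ("always [madvise] never") is not locked to [never]; with THP disabled the AnonHugePages query would be skipped. A small sketch of that gate, written independently of the test harness:

    # Illustrative THP gate: only query AnonHugePages when THP can be in use.
    anon=0
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon_hugepages=${anon}"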
[xtrace: field-by-field scan of /proc/meminfo from MemTotal through HugePages_Rsvd; none of those fields match HugePages_Surp]
00:03:52.161 14:40:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.161 14:40:34 -- setup/common.sh@33 -- # echo 0
00:03:52.161 14:40:34 -- setup/common.sh@33 -- # return 0
00:03:52.161 14:40:34 -- setup/hugepages.sh@99 -- # surp=0
00:03:52.161 14:40:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:52.161 14:40:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:52.161 14:40:34 -- setup/common.sh@18 -- # local node=
00:03:52.161 14:40:34 -- setup/common.sh@19 -- # local var val
00:03:52.161 14:40:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.161 14:40:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.161 14:40:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.161 14:40:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.161 14:40:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.161 14:40:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.161 14:40:34 -- setup/common.sh@31 -- # IFS=': '
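Each of these lookups walks the whole meminfo file field by field; outside the harness the same numbers can be spot-checked with one awk per field. This is an equivalent shortcut for reading the log, not what setup/common.sh itself does:

    # Quick spot checks for the values the trace derives.
    awk '/^HugePages_Total:/ {print "total",    $2}' /proc/meminfo   # 1024 in this run
    awk '/^HugePages_Surp:/  {print "surplus",  $2}' /proc/meminfo   # 0 in this run
    awk '/^HugePages_Rsvd:/  {print "reserved", $2}' /proc/meminfo   # 0 in this run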
00:03:52.161 14:40:34 -- setup/common.sh@31 -- # read -r var val _
00:03:52.161 14:40:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109407500 kB' 'MemAvailable: 112936552 kB' 'Buffers: 4124 kB' 'Cached: 10404436 kB' 'SwapCached: 0 kB' 'Active: 7511620 kB' 'Inactive: 3515716 kB' 'Active(anon): 6821224 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622116 kB' 'Mapped: 171896 kB' 'Shmem: 6202448 kB' 'KReclaimable: 294544 kB' 'Slab: 1063924 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769380 kB' 'KernelStack: 26976 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8199456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB'
[xtrace: field-by-field scan of /proc/meminfo from MemTotal through HugePages_Free; none of those fields match HugePages_Rsvd]
00:03:52.163 14:40:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.163 14:40:34 -- setup/common.sh@33 -- # echo 0
00:03:52.163 14:40:34 -- setup/common.sh@33 -- # return 0
00:03:52.163 14:40:34 -- setup/hugepages.sh@100 -- # resv=0
00:03:52.163 14:40:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:52.163 nr_hugepages=1024
00:03:52.163 14:40:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:52.163 resv_hugepages=0
00:03:52.163 14:40:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:52.163 surplus_hugepages=0
00:03:52.163 14:40:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:52.163 anon_hugepages=0
00:03:52.163 14:40:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.163 14:40:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:52.163 14:40:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:52.163 14:40:34 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.163 14:40:34 -- setup/common.sh@18 -- # local node=
00:03:52.163 14:40:34 -- setup/common.sh@19 -- # local var val
00:03:52.163 14:40:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.163 14:40:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.163 14:40:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.163 14:40:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.163 14:40:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.163 14:40:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.163 14:40:34 -- setup/common.sh@31 -- # IFS=': '
00:03:52.163 14:40:34 -- setup/common.sh@31 -- # read -r var val _
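The two arithmetic tests above are the heart of verify_nr_hugepages: the check has the shape <pool reported by the kernel> == nr_hugepages + surp + resv, and with surplus and reserved both 0 it collapses to the pool matching the requested 1024 pages. A standalone re-check in that spirit (variable names and the error message are illustrative, not the script's own code):

    # Illustrative consistency check over the hugepage counters.
    nr_hugepages=1024                                                  # what the test asked for
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)        # 1024 here
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)        # 0 here
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)        # 0 here
    if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
        echo "hugepage pool consistent: total=${total}"
    else
        echo "unexpected hugepage accounting: total=${total} surp=${surp} resv=${resv}" >&2
        exit 1
    fi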
00:03:52.163 14:40:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109408488 kB' 'MemAvailable: 112937540 kB' 'Buffers: 4124 kB' 'Cached: 10404456 kB' 'SwapCached: 0 kB' 'Active: 7511936 kB' 'Inactive: 3515716 kB' 'Active(anon): 6821540 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622460 kB' 'Mapped: 171896 kB' 'Shmem: 6202468 kB' 'KReclaimable: 294544 kB' 'Slab: 1063924 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769380 kB' 'KernelStack: 26976 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8199472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB'
[xtrace: field-by-field scan of /proc/meminfo for HugePages_Total; MemTotal through Committed_AS checked so far with no match]
00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': '
00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _
00:03:52.164 14:40:34 --
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 
00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.164 14:40:34 -- setup/common.sh@33 -- # echo 1024 00:03:52.164 14:40:34 -- setup/common.sh@33 -- # return 0 00:03:52.164 14:40:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.164 14:40:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.164 14:40:34 -- setup/hugepages.sh@27 -- # local node 00:03:52.164 14:40:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.164 14:40:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.164 14:40:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.164 14:40:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.164 14:40:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.164 14:40:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.164 14:40:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.164 14:40:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.164 14:40:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.164 14:40:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.164 14:40:34 -- setup/common.sh@18 -- # local node=0 00:03:52.164 14:40:34 -- setup/common.sh@19 -- # local var val 00:03:52.164 14:40:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.164 14:40:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.164 14:40:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.164 14:40:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.164 14:40:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.164 14:40:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59900380 kB' 'MemUsed: 5758628 kB' 'SwapCached: 0 kB' 'Active: 2528028 kB' 'Inactive: 106348 kB' 'Active(anon): 2218508 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479380 kB' 'Mapped: 96504 kB' 'AnonPages: 158200 kB' 'Shmem: 2063512 kB' 'KernelStack: 12584 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159296 kB' 'Slab: 530732 kB' 'SReclaimable: 159296 kB' 'SUnreclaim: 371436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.164 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.164 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.164 
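The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key (and, once a node is given, /sys/devices/system/node/node0/meminfo with the "Node 0" prefix stripped) until it reaches the requested field and echoes its value. The following is a minimal standalone sketch of that pattern, not the SPDK script itself; the real helper mapfiles the whole file and strips the prefix with an extglob pattern, as the trace shows.

    # Illustrative re-creation of the get_meminfo pattern seen in the trace;
    # structure is simplified, only the name and file locations come from the log.
    get_meminfo() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # Per-node statistics live in sysfs and prefix every key with "Node <id> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}             # strip the per-node prefix, a no-op for /proc/meminfo
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                        # numeric value only, the "kB" unit falls into $_
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Total       # e.g. 1024
    get_meminfo HugePages_Surp 0      # surplus pages on NUMA node 0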
14:40:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.165 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.165 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 
-- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.426 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.426 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@33 -- # echo 0 00:03:52.427 14:40:34 -- setup/common.sh@33 -- # return 0 00:03:52.427 14:40:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.427 14:40:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.427 14:40:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.427 14:40:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.427 14:40:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.427 14:40:34 -- setup/common.sh@18 -- # local node=1 00:03:52.427 14:40:34 -- setup/common.sh@19 -- # local var val 00:03:52.427 14:40:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.427 14:40:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.427 14:40:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.427 14:40:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.427 14:40:34 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:52.427 14:40:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 49508804 kB' 'MemUsed: 11171056 kB' 'SwapCached: 0 kB' 'Active: 4982964 kB' 'Inactive: 3409368 kB' 'Active(anon): 4602088 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3409368 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7929220 kB' 'Mapped: 75392 kB' 'AnonPages: 463200 kB' 'Shmem: 4138976 kB' 'KernelStack: 14360 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135248 kB' 'Slab: 533184 kB' 'SReclaimable: 135248 kB' 'SUnreclaim: 397936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 
-- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # continue 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.427 14:40:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.427 14:40:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.427 14:40:34 -- setup/common.sh@33 -- # echo 0 00:03:52.427 14:40:34 -- setup/common.sh@33 -- # return 0 00:03:52.428 14:40:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.428 14:40:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.428 14:40:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.428 14:40:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.428 14:40:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.428 node0=512 expecting 512 00:03:52.428 14:40:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.428 14:40:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.428 14:40:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.428 14:40:34 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:52.428 node1=512 expecting 512 00:03:52.428 14:40:34 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.428 00:03:52.428 real 0m3.831s 00:03:52.428 user 0m1.573s 00:03:52.428 sys 0m2.312s 00:03:52.428 14:40:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:52.428 14:40:34 -- common/autotest_common.sh@10 -- # set +x 00:03:52.428 ************************************ 00:03:52.428 END TEST even_2G_alloc 00:03:52.428 ************************************ 00:03:52.428 14:40:34 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:52.428 14:40:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.428 14:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.428 14:40:34 -- common/autotest_common.sh@10 -- # set +x 00:03:52.428 ************************************ 00:03:52.428 START TEST odd_alloc 00:03:52.428 ************************************ 00:03:52.428 14:40:35 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:52.428 14:40:35 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:52.428 14:40:35 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:52.428 14:40:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.428 14:40:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.428 14:40:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:52.428 14:40:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.428 14:40:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.428 14:40:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.428 14:40:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:52.428 14:40:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.428 14:40:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.428 14:40:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.428 14:40:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.428 
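The even_2G_alloc verification that just reported "node0=512 expecting 512" and "node1=512 expecting 512" reduces to simple accounting: the global HugePages_Total must equal nr_hugepages plus surplus and reserved pages, and with HUGE_EVEN_ALLOC each of the two NUMA nodes must hold half of the pool. A hedged sketch of that arithmetic, reusing the get_meminfo pattern sketched earlier (the variable names follow the trace; the glue code is illustrative, not the exact setup/hugepages.sh logic):

    nr_hugepages=1024
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)

    # Global pool check, as in: (( 1024 == nr_hugepages + surp + resv ))
    (( total == nr_hugepages + surp + resv )) || echo "global HugePages_Total mismatch" >&2

    # HUGE_EVEN_ALLOC spreads the pool evenly, so each of the two nodes should hold 512 pages.
    for node in 0 1; do
        node_pages=$(get_meminfo HugePages_Total "$node")
        echo "node$node=$node_pages expecting 512"
    done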
14:40:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.428 14:40:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.428 14:40:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.428 14:40:35 -- setup/hugepages.sh@83 -- # : 513 00:03:52.428 14:40:35 -- setup/hugepages.sh@84 -- # : 1 00:03:52.428 14:40:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.428 14:40:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:52.428 14:40:35 -- setup/hugepages.sh@83 -- # : 0 00:03:52.428 14:40:35 -- setup/hugepages.sh@84 -- # : 0 00:03:52.428 14:40:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.428 14:40:35 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:52.428 14:40:35 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:52.428 14:40:35 -- setup/hugepages.sh@160 -- # setup output 00:03:52.428 14:40:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.428 14:40:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.748 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:56.072 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:56.072 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:56.355 14:40:38 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:56.355 14:40:38 -- setup/hugepages.sh@89 -- # local node 00:03:56.355 14:40:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.355 14:40:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.355 14:40:38 -- setup/hugepages.sh@92 -- # local surp 00:03:56.355 14:40:38 -- setup/hugepages.sh@93 -- # local resv 00:03:56.355 14:40:38 -- setup/hugepages.sh@94 -- # local anon 00:03:56.355 14:40:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.355 14:40:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.355 14:40:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.355 14:40:38 -- setup/common.sh@18 -- # local node= 00:03:56.355 14:40:38 -- setup/common.sh@19 -- # local var val 00:03:56.355 14:40:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.355 14:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.355 14:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.355 14:40:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.355 14:40:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.355 14:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
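For odd_alloc the target is 1025 pages (HUGEMEM=2049), which cannot be split evenly, so the per-node loop traced above hands node1 512 pages and node0 the remaining 513. Roughly, the distribution behaves like this simplified sketch (a reconstruction for illustration, not the exact setup/hugepages.sh loop):

    # Split an odd hugepage count across NUMA nodes, highest-numbered node first.
    nr_hugepages=1025
    no_nodes=2
    declare -a nodes_test

    remaining=$nr_hugepages
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        share=$(( remaining / (node + 1) ))   # even share among the nodes still unassigned
        nodes_test[node]=$share
        remaining=$(( remaining - share ))
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]}"  # node0=513, node1=512 for 1025 pages
    done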
+([0-9]) }") 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.355 14:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109410760 kB' 'MemAvailable: 112939812 kB' 'Buffers: 4124 kB' 'Cached: 10404568 kB' 'SwapCached: 0 kB' 'Active: 7513136 kB' 'Inactive: 3515716 kB' 'Active(anon): 6822740 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623468 kB' 'Mapped: 171964 kB' 'Shmem: 6202580 kB' 'KReclaimable: 294544 kB' 'Slab: 1063896 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769352 kB' 'KernelStack: 26976 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8202892 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234796 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.355 14:40:38 -- setup/common.sh@32 
-- # continue 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.355 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.355 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.356 14:40:38 -- setup/common.sh@33 -- # echo 0 00:03:56.356 14:40:38 -- setup/common.sh@33 -- # return 0 00:03:56.356 14:40:38 -- setup/hugepages.sh@97 -- # anon=0 00:03:56.356 14:40:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.356 14:40:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.356 14:40:38 -- setup/common.sh@18 -- # local node= 00:03:56.356 14:40:38 -- setup/common.sh@19 -- # local var val 00:03:56.356 14:40:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.356 14:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.356 14:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.356 14:40:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.356 14:40:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.356 14:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.356 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.356 14:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109413040 kB' 'MemAvailable: 112942092 kB' 'Buffers: 4124 kB' 'Cached: 10404588 kB' 'SwapCached: 0 kB' 'Active: 7513504 kB' 'Inactive: 3515716 kB' 'Active(anon): 6823108 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'AnonPages: 623968 kB' 'Mapped: 171920 kB' 'Shmem: 6202600 kB' 'KReclaimable: 294544 kB' 'Slab: 1063948 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769404 kB' 'KernelStack: 27008 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8203032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234796 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 
14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.357 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.357 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 
-- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.358 14:40:38 -- setup/common.sh@33 -- # echo 0 00:03:56.358 14:40:38 -- setup/common.sh@33 -- # return 0 00:03:56.358 14:40:38 -- setup/hugepages.sh@99 -- # surp=0 00:03:56.358 14:40:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.358 14:40:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.358 14:40:38 -- setup/common.sh@18 -- # local node= 00:03:56.358 14:40:38 -- setup/common.sh@19 -- # local var val 00:03:56.358 14:40:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.358 14:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.358 14:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.358 14:40:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.358 14:40:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.358 14:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109416500 kB' 'MemAvailable: 112945552 kB' 'Buffers: 4124 kB' 'Cached: 10404600 kB' 'SwapCached: 0 kB' 'Active: 7513572 kB' 'Inactive: 3515716 kB' 'Active(anon): 6823176 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624080 kB' 'Mapped: 171928 kB' 'Shmem: 6202612 kB' 'KReclaimable: 294544 kB' 'Slab: 1064044 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769500 kB' 'KernelStack: 26960 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8203416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234748 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.358 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.358 14:40:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 
14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.359 14:40:38 -- setup/common.sh@33 -- # echo 0 00:03:56.359 14:40:38 -- setup/common.sh@33 -- # return 0 
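As the trace shows, setup/common.sh resolves one meminfo field per call: get_meminfo loads /proc/meminfo (or the per-node file under /sys/devices/system/node when a node argument is passed), walks the entries with IFS=': ' read -r var val _, and echoes the value of the first field whose name matches the requested key; in this run AnonHugePages, HugePages_Surp and HugePages_Rsvd all resolve to 0. A minimal sketch of that lookup, assuming only the behavior visible in the trace (the name get_meminfo_sketch and its line-by-line loop are illustrative; the real helper slurps the whole file with mapfile and strips the "Node <N>" prefix with a glob substitution):

#!/usr/bin/env bash
# get_meminfo_sketch KEY [NODE] - print the value of KEY from /proc/meminfo,
# or from /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
# Illustrative stand-in modeled on the get_meminfo calls traced above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val rest
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node${node}/meminfo
    while read -r line; do
        # Per-node meminfo lines are prefixed with "Node <N> "; drop that prefix.
        line=${line#Node [0-9]* }
        # Split "Key:   value kB" into key and value on ':' and spaces.
        IFS=': ' read -r var val rest <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    # Key not present in the file: report 0, as the hugepages checks expect.
    echo 0
}

Used the same way as the traced calls, e.g. surp=$(get_meminfo_sketch HugePages_Surp) for the system-wide count, or get_meminfo_sketch HugePages_Surp 0 for NUMA node 0, which is the per-node lookup the script performs next before accounting for the 512/513 hugepages split across the two nodes.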
00:03:56.359 14:40:38 -- setup/hugepages.sh@100 -- # resv=0 00:03:56.359 14:40:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:56.359 nr_hugepages=1025 00:03:56.359 14:40:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.359 resv_hugepages=0 00:03:56.359 14:40:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.359 surplus_hugepages=0 00:03:56.359 14:40:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.359 anon_hugepages=0 00:03:56.359 14:40:38 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:56.359 14:40:38 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:56.359 14:40:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.359 14:40:38 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.359 14:40:38 -- setup/common.sh@18 -- # local node= 00:03:56.359 14:40:38 -- setup/common.sh@19 -- # local var val 00:03:56.359 14:40:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.359 14:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.359 14:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.359 14:40:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.359 14:40:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.359 14:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.359 14:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109418768 kB' 'MemAvailable: 112947820 kB' 'Buffers: 4124 kB' 'Cached: 10404612 kB' 'SwapCached: 0 kB' 'Active: 7513012 kB' 'Inactive: 3515716 kB' 'Active(anon): 6822616 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623888 kB' 'Mapped: 171928 kB' 'Shmem: 6202624 kB' 'KReclaimable: 294544 kB' 'Slab: 1064044 kB' 'SReclaimable: 294544 kB' 'SUnreclaim: 769500 kB' 'KernelStack: 27040 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8203428 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.359 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.359 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # 
continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.360 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.360 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.361 14:40:38 -- setup/common.sh@33 -- # echo 1025 00:03:56.361 14:40:38 -- setup/common.sh@33 -- # return 0 00:03:56.361 14:40:38 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:56.361 14:40:38 -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.361 14:40:38 -- setup/hugepages.sh@27 -- # local node 00:03:56.361 14:40:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.361 14:40:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.361 14:40:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.361 14:40:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:56.361 14:40:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.361 14:40:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.361 14:40:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.361 14:40:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.361 14:40:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.361 14:40:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.361 14:40:38 -- setup/common.sh@18 -- # local node=0 00:03:56.361 
14:40:38 -- setup/common.sh@19 -- # local var val 00:03:56.361 14:40:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.361 14:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.361 14:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.361 14:40:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.361 14:40:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.361 14:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59912324 kB' 'MemUsed: 5746684 kB' 'SwapCached: 0 kB' 'Active: 2530576 kB' 'Inactive: 106348 kB' 'Active(anon): 2221056 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479448 kB' 'Mapped: 96536 kB' 'AnonPages: 160644 kB' 'Shmem: 2063580 kB' 'KernelStack: 12760 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159296 kB' 'Slab: 530788 kB' 'SReclaimable: 159296 kB' 'SUnreclaim: 371492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.361 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.361 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- 
# continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@33 -- # echo 0 00:03:56.362 14:40:38 -- setup/common.sh@33 -- # return 0 00:03:56.362 14:40:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.362 14:40:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.362 14:40:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.362 14:40:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.362 14:40:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.362 14:40:38 -- setup/common.sh@18 -- # local node=1 00:03:56.362 14:40:38 -- setup/common.sh@19 -- # local var val 00:03:56.362 14:40:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.362 14:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.362 14:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.362 14:40:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.362 14:40:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.362 14:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 49508956 kB' 'MemUsed: 11170904 kB' 'SwapCached: 0 kB' 'Active: 4983512 kB' 'Inactive: 3409368 kB' 'Active(anon): 4602636 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3409368 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7929316 kB' 'Mapped: 75392 kB' 'AnonPages: 463760 kB' 'Shmem: 4139072 kB' 'KernelStack: 14360 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135248 kB' 'Slab: 533256 kB' 'SReclaimable: 135248 kB' 'SUnreclaim: 398008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.362 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.362 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # 
continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # continue 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.363 14:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.363 14:40:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.363 14:40:38 -- setup/common.sh@33 -- # echo 0 00:03:56.363 14:40:38 -- setup/common.sh@33 -- # return 0 00:03:56.363 14:40:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.363 14:40:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.363 14:40:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.363 14:40:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.363 14:40:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:56.363 node0=512 expecting 513 00:03:56.363 14:40:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.363 14:40:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.363 14:40:38 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.363 14:40:38 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:56.363 node1=513 expecting 512 00:03:56.363 14:40:38 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:56.363 00:03:56.363 real 0m3.916s 00:03:56.363 user 0m1.582s 00:03:56.363 sys 0m2.387s 00:03:56.363 14:40:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.363 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:03:56.363 ************************************ 00:03:56.363 END TEST odd_alloc 00:03:56.363 ************************************ 00:03:56.363 14:40:39 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:56.363 14:40:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.363 14:40:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.363 14:40:39 -- common/autotest_common.sh@10 -- # set +x 00:03:56.624 ************************************ 00:03:56.624 START TEST custom_alloc 00:03:56.624 ************************************ 00:03:56.624 14:40:39 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:56.624 14:40:39 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:56.624 14:40:39 -- setup/hugepages.sh@169 -- # local node 00:03:56.624 14:40:39 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:56.624 14:40:39 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:56.624 14:40:39 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:56.624 14:40:39 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:56.624 14:40:39 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:56.624 14:40:39 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:56.624 14:40:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.624 14:40:39 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.624 14:40:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.624 14:40:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:56.624 14:40:39 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.624 14:40:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.624 14:40:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.624 14:40:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:56.624 14:40:39 -- setup/hugepages.sh@83 -- # : 256 00:03:56.624 14:40:39 -- setup/hugepages.sh@84 -- # : 1 00:03:56.624 14:40:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:56.624 14:40:39 -- setup/hugepages.sh@83 -- # : 0 00:03:56.624 14:40:39 -- setup/hugepages.sh@84 -- # : 0 00:03:56.624 14:40:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:56.624 14:40:39 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:56.624 14:40:39 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:56.624 14:40:39 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.624 
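The custom_alloc trace that starts here converts each requested size in kB into a hugepage count (1048576 kB -> 512 pages and 2097152 kB -> 1024 pages, i.e. the size divided by the 2048 kB Hugepagesize reported in the meminfo dumps further down) and records the per-node plan as HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'. A condensed shell sketch of that arithmetic and string assembly, assuming a 2048 kB default hugepage size; it illustrates the pattern visible in the trace and is not the setup/hugepages.sh source:

# Sketch: derive hugepage counts from sizes in kB and build the HUGENODE plan,
# mirroring the values traced above (512 for node 0, 1024 for node 1).
default_hugepages=2048                          # Hugepagesize in kB (2 MiB pages)

get_test_nr_hugepages() {
    local size_kb=$1
    echo $(( size_kb / default_hugepages ))     # 1048576 -> 512, 2097152 -> 1024
}

nodes_hp=( [0]=$(get_test_nr_hugepages 1048576) [1]=$(get_test_nr_hugepages 2097152) )
HUGENODE=
for node in "${!nodes_hp[@]}"; do
    HUGENODE+="${HUGENODE:+,}nodes_hp[$node]=${nodes_hp[node]}"
done
echo "$HUGENODE"                                # nodes_hp[0]=512,nodes_hp[1]=1024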
14:40:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:56.624 14:40:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.624 14:40:39 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.624 14:40:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.624 14:40:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.624 14:40:39 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.624 14:40:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.624 14:40:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.624 14:40:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:56.624 14:40:39 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:56.624 14:40:39 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:56.625 14:40:39 -- setup/hugepages.sh@78 -- # return 0 00:03:56.625 14:40:39 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:56.625 14:40:39 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:56.625 14:40:39 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:56.625 14:40:39 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:56.625 14:40:39 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:56.625 14:40:39 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:56.625 14:40:39 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:56.625 14:40:39 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:56.625 14:40:39 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.625 14:40:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.625 14:40:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.625 14:40:39 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.625 14:40:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.625 14:40:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.625 14:40:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.625 14:40:39 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:56.625 14:40:39 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:56.625 14:40:39 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:56.625 14:40:39 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:56.625 14:40:39 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:56.625 14:40:39 -- setup/hugepages.sh@78 -- # return 0 00:03:56.625 14:40:39 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:56.625 14:40:39 -- setup/hugepages.sh@187 -- # setup output 00:03:56.625 14:40:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.625 14:40:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.919 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:00:01.6 (8086 0b00): Already using the 
vfio-pci driver 00:03:59.920 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.920 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.920 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.494 14:40:42 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:00.494 14:40:42 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:00.494 14:40:42 -- setup/hugepages.sh@89 -- # local node 00:04:00.494 14:40:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.494 14:40:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.494 14:40:42 -- setup/hugepages.sh@92 -- # local surp 00:04:00.494 14:40:42 -- setup/hugepages.sh@93 -- # local resv 00:04:00.494 14:40:42 -- setup/hugepages.sh@94 -- # local anon 00:04:00.494 14:40:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.494 14:40:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.494 14:40:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.494 14:40:42 -- setup/common.sh@18 -- # local node= 00:04:00.494 14:40:42 -- setup/common.sh@19 -- # local var val 00:04:00.494 14:40:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.494 14:40:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.494 14:40:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.494 14:40:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.494 14:40:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.494 14:40:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108356424 kB' 'MemAvailable: 111885444 kB' 'Buffers: 4124 kB' 'Cached: 10404728 kB' 'SwapCached: 0 kB' 'Active: 7514636 kB' 'Inactive: 3515716 kB' 'Active(anon): 6824240 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624348 kB' 'Mapped: 172080 kB' 'Shmem: 6202740 kB' 'KReclaimable: 294480 kB' 'Slab: 1064376 kB' 'SReclaimable: 294480 kB' 'SUnreclaim: 769896 kB' 'KernelStack: 26880 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8200816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234844 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # 
continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.494 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.494 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 
-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 
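The long runs of "continue" entries on either side of this point are all produced by one helper: it loads /proc/meminfo (or, when a node number is passed, /sys/devices/system/node/nodeN/meminfo with the "Node N " prefix stripped), then walks every "key: value" line until it reaches the requested field and echoes its value. A minimal stand-alone sketch of that lookup, written from the behaviour visible in the trace rather than copied from setup/common.sh:

# Sketch: return one field from /proc/meminfo or a per-node meminfo file,
# the same lookup the surrounding xtrace lines are stepping through.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")            # per-node lines start with "Node N "

    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    echo 0                                      # field not present
}

get_meminfo HugePages_Surp                      # system-wide
get_meminfo HugePages_Surp 1                    # NUMA node 1, if it exists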
00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.495 14:40:42 -- setup/common.sh@33 -- # echo 0 00:04:00.495 14:40:42 -- setup/common.sh@33 -- # return 0 00:04:00.495 14:40:42 -- setup/hugepages.sh@97 -- # anon=0 00:04:00.495 14:40:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.495 14:40:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.495 14:40:42 -- setup/common.sh@18 -- # local node= 00:04:00.495 14:40:42 -- setup/common.sh@19 -- # local var val 00:04:00.495 14:40:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.495 14:40:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.495 14:40:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.495 14:40:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.495 14:40:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.495 14:40:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108356900 kB' 'MemAvailable: 111885920 kB' 'Buffers: 4124 kB' 'Cached: 10404728 kB' 'SwapCached: 0 kB' 'Active: 7513720 kB' 'Inactive: 3515716 kB' 'Active(anon): 6823324 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623408 kB' 'Mapped: 172036 kB' 'Shmem: 6202740 kB' 'KReclaimable: 294480 kB' 'Slab: 1064424 kB' 'SReclaimable: 294480 kB' 'SUnreclaim: 769944 kB' 'KernelStack: 26944 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8200828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.495 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.495 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 
00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.496 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.496 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.497 14:40:42 -- setup/common.sh@33 -- # echo 0 00:04:00.497 14:40:42 -- setup/common.sh@33 -- # return 0 00:04:00.497 14:40:42 -- setup/hugepages.sh@99 -- # surp=0 00:04:00.497 14:40:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.497 14:40:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.497 14:40:42 -- setup/common.sh@18 -- # local node= 00:04:00.497 14:40:42 -- setup/common.sh@19 -- # local var val 00:04:00.497 14:40:42 -- setup/common.sh@20 
-- # local mem_f mem 00:04:00.497 14:40:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.497 14:40:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.497 14:40:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.497 14:40:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.497 14:40:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108357796 kB' 'MemAvailable: 111886784 kB' 'Buffers: 4124 kB' 'Cached: 10404740 kB' 'SwapCached: 0 kB' 'Active: 7513248 kB' 'Inactive: 3515716 kB' 'Active(anon): 6822852 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623432 kB' 'Mapped: 171960 kB' 'Shmem: 6202752 kB' 'KReclaimable: 294416 kB' 'Slab: 1064324 kB' 'SReclaimable: 294416 kB' 'SUnreclaim: 769908 kB' 'KernelStack: 26960 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8200844 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 
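After the anonymous-hugepage check settles at anon=0, the verification reads the system-wide HugePages_Surp (surp=0 above) and, in the lookup traced here, HugePages_Rsvd, then folds reserved and per-node surplus pages into the expected per-node totals before comparing them, the same accounting the odd_alloc run finished with ("node0=512 expecting 513"). A compact sketch of that step, reusing the illustrative get_meminfo helper above; the nodes_test seed values come from the HUGENODE plan, and the hugepages.sh line references in the comments point at the trace:

# Sketch: per-node hugepage accounting as traced (hugepages.sh@116/@117):
# start from the planned split, add reserved and per-node surplus pages.
nodes_test=( [0]=512 [1]=1024 )                 # planned HUGENODE split

surp=$(get_meminfo HugePages_Surp)              # 0 in this log
resv=$(get_meminfo HugePages_Rsvd)              # 0 in this log

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))              # hugepages.sh@116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # @117
    echo "node$node=${nodes_test[node]}"
done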
00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.497 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.497 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- 
setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.498 14:40:42 -- setup/common.sh@33 -- # echo 0 00:04:00.498 14:40:42 -- setup/common.sh@33 -- # return 0 00:04:00.498 14:40:42 -- setup/hugepages.sh@100 -- # resv=0 00:04:00.498 14:40:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:00.498 nr_hugepages=1536 00:04:00.498 14:40:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.498 resv_hugepages=0 00:04:00.498 14:40:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.498 surplus_hugepages=0 00:04:00.498 14:40:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.498 anon_hugepages=0 00:04:00.498 14:40:42 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:00.498 14:40:42 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:00.498 14:40:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.498 14:40:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.498 14:40:42 -- setup/common.sh@18 -- # local node= 00:04:00.498 14:40:42 -- setup/common.sh@19 -- # local var val 00:04:00.498 14:40:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.498 14:40:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.498 14:40:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.498 14:40:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.498 14:40:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.498 14:40:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 108358656 kB' 'MemAvailable: 111887644 
kB' 'Buffers: 4124 kB' 'Cached: 10404756 kB' 'SwapCached: 0 kB' 'Active: 7513260 kB' 'Inactive: 3515716 kB' 'Active(anon): 6822864 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623428 kB' 'Mapped: 171960 kB' 'Shmem: 6202768 kB' 'KReclaimable: 294416 kB' 'Slab: 1064324 kB' 'SReclaimable: 294416 kB' 'SUnreclaim: 769908 kB' 'KernelStack: 26960 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8200856 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234812 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
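The meminfo snapshot just printed is internally consistent: HugePages_Total 1536 at a Hugepagesize of 2048 kB gives exactly the 3145728 kB reported as Hugetlb, and hugepages.sh verifies the same pool with its (( 1536 == nr_hugepages + surp + resv )) check. A self-contained way to re-run that consistency check outside the test harness (the variable names and output format below are ours, not the script's):

  # verify the global hugepage pool against an expected page count
  expected=1536
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  rsvd=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
  surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
  size=$(awk  '$1 == "Hugepagesize:"    {print $2}' /proc/meminfo)   # in kB, 2048 here
  (( total == expected )) && echo "pool OK: $total pages x $size kB = $(( total * size )) kB (rsvd=$rsvd surp=$surp)"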
00:04:00.498 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.498 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.498 14:40:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- 
# continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.499 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.499 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.500 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:42 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:00.500 14:40:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.500 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.500 14:40:42 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.500 14:40:42 -- setup/common.sh@33 -- # echo 1536 00:04:00.500 14:40:42 -- setup/common.sh@33 -- # return 0 00:04:00.500 14:40:42 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:00.500 14:40:42 -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.500 14:40:42 -- setup/hugepages.sh@27 -- # local node 00:04:00.500 14:40:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.500 14:40:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.500 14:40:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.500 14:40:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.500 14:40:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.500 14:40:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.500 14:40:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.500 14:40:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.500 14:40:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.500 14:40:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.500 14:40:43 -- setup/common.sh@18 -- # local node=0 00:04:00.500 14:40:43 -- setup/common.sh@19 -- # local var val 00:04:00.500 14:40:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.500 14:40:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.500 14:40:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.500 14:40:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.500 14:40:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.500 14:40:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59904216 kB' 'MemUsed: 5754792 kB' 'SwapCached: 0 kB' 'Active: 2529356 kB' 'Inactive: 106348 kB' 'Active(anon): 2219836 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479492 kB' 'Mapped: 96572 kB' 'AnonPages: 159352 kB' 'Shmem: 2063624 kB' 'KernelStack: 12536 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159296 kB' 'Slab: 530716 kB' 'SReclaimable: 159296 kB' 'SUnreclaim: 371420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- 
setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 
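For this per-node pass the same scan runs against sysfs instead of /proc: with node=0 the file becomes /sys/devices/system/node/node0/meminfo and the leading "Node 0 " prefix is stripped (the mem=("${mem[@]#Node +([0-9]) }") step in the trace) before the key/value loop starts. A self-contained equivalent that uses sed and awk rather than the script's own helper (node_meminfo is an illustrative name):

  # node_meminfo NODE KEY -- read one counter from a node's sysfs meminfo
  node_meminfo() {
      local node=$1 key=$2
      sed 's/^Node [0-9]* *//' "/sys/devices/system/node/node${node}/meminfo" \
          | awk -v k="${key}:" '$1 == k {print $2; exit}'
  }
  # e.g. node_meminfo 0 HugePages_Surp   -> 0, the value this lookup is about to return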
00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.500 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.500 14:40:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@33 -- # echo 0 00:04:00.501 14:40:43 -- setup/common.sh@33 -- # return 0 00:04:00.501 14:40:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.501 14:40:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.501 14:40:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.501 14:40:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:00.501 14:40:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.501 14:40:43 -- 
setup/common.sh@18 -- # local node=1 00:04:00.501 14:40:43 -- setup/common.sh@19 -- # local var val 00:04:00.501 14:40:43 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.501 14:40:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.501 14:40:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:00.501 14:40:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:00.501 14:40:43 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.501 14:40:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679860 kB' 'MemFree: 48454392 kB' 'MemUsed: 12225468 kB' 'SwapCached: 0 kB' 'Active: 4983948 kB' 'Inactive: 3409368 kB' 'Active(anon): 4603072 kB' 'Inactive(anon): 0 kB' 'Active(file): 380876 kB' 'Inactive(file): 3409368 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7929416 kB' 'Mapped: 75388 kB' 'AnonPages: 464064 kB' 'Shmem: 4139172 kB' 'KernelStack: 14424 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135120 kB' 'Slab: 533608 kB' 'SReclaimable: 135120 kB' 'SUnreclaim: 398488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.501 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.501 14:40:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
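The per-node split the test expects is 512 pages on node0 plus 1024 on node1, which is exactly the 1536-page global pool seen earlier; the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines below are that comparison. A quick way to recompute the sum outside the test (variable names are ours):

  # add up HugePages_Total across every NUMA node and compare with the global pool
  sum=0
  for f in /sys/devices/system/node/node[0-9]*/meminfo; do
      # per-node lines read "Node N HugePages_Total:  <pages>", so the count is field 4
      sum=$(( sum + $(awk '$3 == "HugePages_Total:" {print $4}' "$f") ))
  done
  echo "per-node sum: $sum pages, global: $(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)"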
00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # continue 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.502 14:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.502 14:40:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.502 14:40:43 -- setup/common.sh@33 -- # echo 0 00:04:00.502 14:40:43 -- setup/common.sh@33 -- # return 0 00:04:00.502 14:40:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.502 14:40:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.502 14:40:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.502 14:40:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.502 14:40:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:00.502 node0=512 expecting 512 00:04:00.502 14:40:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.502 14:40:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.502 14:40:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.502 14:40:43 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:00.502 node1=1024 expecting 1024 00:04:00.502 14:40:43 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:00.502 00:04:00.502 real 0m3.890s 00:04:00.502 user 0m1.633s 00:04:00.502 sys 0m2.310s 00:04:00.502 14:40:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:00.502 14:40:43 -- common/autotest_common.sh@10 -- # set +x 00:04:00.502 ************************************ 00:04:00.502 END TEST custom_alloc 00:04:00.502 ************************************ 00:04:00.502 14:40:43 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:00.502 14:40:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.502 14:40:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.502 14:40:43 -- common/autotest_common.sh@10 -- # set +x 00:04:00.761 ************************************ 00:04:00.761 START TEST no_shrink_alloc 00:04:00.761 ************************************ 00:04:00.761 14:40:43 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:04:00.761 14:40:43 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:00.761 14:40:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:00.761 14:40:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:00.761 14:40:43 -- setup/hugepages.sh@51 -- # shift 00:04:00.761 14:40:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:00.761 14:40:43 -- setup/hugepages.sh@52 -- # local node_ids 00:04:00.761 14:40:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages 
)) 00:04:00.761 14:40:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:00.761 14:40:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:00.761 14:40:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:00.761 14:40:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.761 14:40:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.761 14:40:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.762 14:40:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.762 14:40:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.762 14:40:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:00.762 14:40:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:00.762 14:40:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:00.762 14:40:43 -- setup/hugepages.sh@73 -- # return 0 00:04:00.762 14:40:43 -- setup/hugepages.sh@198 -- # setup output 00:04:00.762 14:40:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.762 14:40:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.057 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:04.057 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:04.057 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:04.319 14:40:46 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:04.319 14:40:46 -- setup/hugepages.sh@89 -- # local node 00:04:04.319 14:40:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.319 14:40:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.319 14:40:46 -- setup/hugepages.sh@92 -- # local surp 00:04:04.319 14:40:46 -- setup/hugepages.sh@93 -- # local resv 00:04:04.319 14:40:46 -- setup/hugepages.sh@94 -- # local anon 00:04:04.319 14:40:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.319 14:40:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.319 14:40:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.319 14:40:46 -- setup/common.sh@18 -- # local node= 00:04:04.319 14:40:46 -- setup/common.sh@19 -- # local var val 00:04:04.319 14:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.319 14:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.319 14:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.319 14:40:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.319 14:40:46 -- setup/common.sh@28 -- # mapfile -t 
mem 00:04:04.319 14:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109376812 kB' 'MemAvailable: 112905800 kB' 'Buffers: 4124 kB' 'Cached: 10404868 kB' 'SwapCached: 0 kB' 'Active: 7515444 kB' 'Inactive: 3515716 kB' 'Active(anon): 6825048 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625112 kB' 'Mapped: 172076 kB' 'Shmem: 6202880 kB' 'KReclaimable: 294416 kB' 'Slab: 1064124 kB' 'SReclaimable: 294416 kB' 'SUnreclaim: 769708 kB' 'KernelStack: 26976 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8202076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234604 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.319 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.319 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 
00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.320 14:40:46 -- setup/common.sh@33 -- # echo 0 00:04:04.320 14:40:46 -- setup/common.sh@33 -- # return 0 00:04:04.320 14:40:46 -- setup/hugepages.sh@97 -- # anon=0 00:04:04.320 14:40:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.320 14:40:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.320 14:40:46 -- setup/common.sh@18 -- # local node= 00:04:04.320 14:40:46 -- setup/common.sh@19 -- # local var val 00:04:04.320 14:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.320 14:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.320 14:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.320 14:40:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.320 14:40:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.320 14:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109377592 kB' 'MemAvailable: 112906580 kB' 'Buffers: 4124 kB' 'Cached: 10404872 kB' 'SwapCached: 0 kB' 'Active: 7514788 kB' 'Inactive: 3515716 kB' 'Active(anon): 6824392 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
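Note: the xtrace block above is setup/common.sh get_meminfo() looking up AnonHugePages. The function snapshots a meminfo file, then re-reads it with IFS=': ', tracing a "continue" for every key that does not match the requested field and echoing the value once it does; setup/hugepages.sh stores the result as anon=0. Below is a minimal stand-alone sketch of that lookup pattern. It is a simplification, not the actual setup/common.sh code; the per-node prefix handling is assumed from the "Node +([0-9])" strip visible in the trace.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip the node prefix

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node index, prefer the per-NUMA-node view when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            # Per-node meminfo lines carry a "Node <n> " prefix; drop it so the
            # key comparison works the same for both files.
            line=${line#Node +([0-9]) }
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        echo 0
    }

    get_meminfo_sketch AnonHugePages      # prints 0 on this machine, as in the trace
    get_meminfo_sketch HugePages_Total 0  # per-node lookup; 1024 for node 0 in this run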
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624484 kB' 'Mapped: 172064 kB' 'Shmem: 6202884 kB' 'KReclaimable: 294416 kB' 'Slab: 1064092 kB' 'SReclaimable: 294416 kB' 'SUnreclaim: 769676 kB' 'KernelStack: 26976 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8202088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234588 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.320 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.320 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.321 14:40:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.321 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 
14:40:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.585 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.585 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.586 14:40:46 -- setup/common.sh@33 -- # echo 0 00:04:04.586 14:40:46 -- setup/common.sh@33 -- # return 0 00:04:04.586 14:40:46 -- setup/hugepages.sh@99 -- # surp=0 00:04:04.586 14:40:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.586 14:40:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.586 14:40:46 -- setup/common.sh@18 -- # local node= 00:04:04.586 14:40:46 -- setup/common.sh@19 -- # local var val 00:04:04.586 14:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.586 14:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.586 14:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.586 14:40:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.586 14:40:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.586 14:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109378448 kB' 'MemAvailable: 112907436 kB' 'Buffers: 4124 kB' 'Cached: 10404872 kB' 'SwapCached: 0 kB' 'Active: 7514120 kB' 'Inactive: 3515716 kB' 'Active(anon): 6823724 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624244 kB' 'Mapped: 171984 kB' 'Shmem: 6202884 kB' 'KReclaimable: 294416 kB' 'Slab: 1064072 kB' 'SReclaimable: 294416 kB' 'SUnreclaim: 769656 kB' 'KernelStack: 26960 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8202104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234588 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:04.586 14:40:46 -- 
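Note: the snapshot just printed reports the same hugepage state as the earlier one: HugePages_Total 1024, Hugepagesize 2048 kB, HugePages_Surp 0. The Hugetlb figure is simply the page count times the page size and can be cross-checked directly (values taken from the logged snapshot):

    # 1024 pages of 2048 kB each
    echo $(( 1024 * 2048 ))   # 2097152, matching 'Hugetlb: 2097152 kB' above

The field-by-field scan that follows is the same loop as before, now matching each key against HugePages_Rsvd.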
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.586 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.586 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- 
setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 
14:40:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.587 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.587 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.588 14:40:47 -- setup/common.sh@33 -- # echo 0 00:04:04.588 
14:40:47 -- setup/common.sh@33 -- # return 0 00:04:04.588 14:40:47 -- setup/hugepages.sh@100 -- # resv=0 00:04:04.588 14:40:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.588 nr_hugepages=1024 00:04:04.588 14:40:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.588 resv_hugepages=0 00:04:04.588 14:40:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.588 surplus_hugepages=0 00:04:04.588 14:40:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.588 anon_hugepages=0 00:04:04.588 14:40:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.588 14:40:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.588 14:40:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.588 14:40:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.588 14:40:47 -- setup/common.sh@18 -- # local node= 00:04:04.588 14:40:47 -- setup/common.sh@19 -- # local var val 00:04:04.588 14:40:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.588 14:40:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.588 14:40:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.588 14:40:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.588 14:40:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.588 14:40:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109378512 kB' 'MemAvailable: 112907500 kB' 'Buffers: 4124 kB' 'Cached: 10404876 kB' 'SwapCached: 0 kB' 'Active: 7514268 kB' 'Inactive: 3515716 kB' 'Active(anon): 6823872 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624388 kB' 'Mapped: 171984 kB' 'Shmem: 6202888 kB' 'KReclaimable: 294416 kB' 'Slab: 1064072 kB' 'SReclaimable: 294416 kB' 'SUnreclaim: 769656 kB' 'KernelStack: 26944 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8202116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234604 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
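Note: at this point setup/hugepages.sh has collected anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), echoes them, and asserts that the expected pool size of 1024 matches the accounting before re-reading HugePages_Total for the final comparison. A sketch of that consistency check, reusing the hypothetical get_meminfo_sketch helper from the earlier example (values as logged in this run; the exact ordering inside the real script may differ):

    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run

    # With no surplus and no reserved pages, the system-wide total must match
    # the requested pool size exactly.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( total == nr_hugepages ))               || echo "unexpected HugePages_Total" >&2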
00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 
00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.588 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.588 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 
14:40:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.589 14:40:47 -- setup/common.sh@33 -- # echo 1024 00:04:04.589 14:40:47 -- setup/common.sh@33 -- # return 0 00:04:04.589 14:40:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.589 14:40:47 -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.589 14:40:47 -- setup/hugepages.sh@27 -- # local node 00:04:04.589 14:40:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.589 14:40:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.589 14:40:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.589 14:40:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:04.589 14:40:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.589 14:40:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.589 14:40:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.589 14:40:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.589 14:40:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.589 14:40:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.589 14:40:47 
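Note: the get_nodes step above enumerates the NUMA nodes via the /sys/devices/system/node/node+([0-9]) glob and records per-node hugepage counts (1024 on node 0, 0 on node 1, no_nodes=2); each node is then checked with a per-node get_meminfo call, starting with HugePages_Surp on node 0. A sketch of that enumeration follows; the per-node sysfs path used for the count is an assumption, since the trace only shows the resulting assignments:

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # Assumed source of the count: the per-node 2 MiB hugepage pool in sysfs.
        nodes_sys[$n]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]} counts: ${nodes_sys[*]}"   # e.g. no_nodes=2 counts: 1024 0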
-- setup/common.sh@18 -- # local node=0 00:04:04.589 14:40:47 -- setup/common.sh@19 -- # local var val 00:04:04.589 14:40:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.589 14:40:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.589 14:40:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.589 14:40:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.589 14:40:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.589 14:40:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58871000 kB' 'MemUsed: 6788008 kB' 'SwapCached: 0 kB' 'Active: 2531404 kB' 'Inactive: 106348 kB' 'Active(anon): 2221884 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479536 kB' 'Mapped: 96600 kB' 'AnonPages: 161328 kB' 'Shmem: 2063668 kB' 'KernelStack: 12552 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159296 kB' 'Slab: 530408 kB' 'SReclaimable: 159296 kB' 'SUnreclaim: 371112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.589 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.589 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # continue 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.590 14:40:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.590 14:40:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.590 14:40:47 -- setup/common.sh@33 -- # echo 0 00:04:04.590 14:40:47 -- setup/common.sh@33 -- # return 0 00:04:04.590 14:40:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.590 14:40:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.590 14:40:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.590 14:40:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.590 14:40:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.590 node0=1024 expecting 1024 00:04:04.590 14:40:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.590 14:40:47 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:04.590 14:40:47 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:04.590 14:40:47 -- setup/hugepages.sh@202 -- # setup output 00:04:04.590 14:40:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.590 14:40:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.887 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:07.887 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:07.887 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.148 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:08.148 14:40:50 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:04:08.148 14:40:50 -- setup/hugepages.sh@89 -- # local node 00:04:08.148 14:40:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.148 14:40:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.148 14:40:50 -- setup/hugepages.sh@92 -- # local surp 00:04:08.148 14:40:50 -- setup/hugepages.sh@93 -- # local resv 00:04:08.148 14:40:50 -- setup/hugepages.sh@94 -- # local anon 00:04:08.148 14:40:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.148 14:40:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.148 14:40:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.148 14:40:50 -- setup/common.sh@18 -- # local node= 00:04:08.148 14:40:50 -- setup/common.sh@19 -- # local var val 00:04:08.148 14:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.148 14:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.148 14:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.148 14:40:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.148 14:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.148 14:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.148 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.148 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109382380 kB' 'MemAvailable: 112911368 kB' 'Buffers: 4124 kB' 'Cached: 10404988 kB' 'SwapCached: 0 kB' 'Active: 7518496 kB' 'Inactive: 3515716 kB' 'Active(anon): 6828100 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628416 kB' 'Mapped: 172048 kB' 'Shmem: 6203000 kB' 'KReclaimable: 294416 kB' 'Slab: 1063888 kB' 'SReclaimable: 294416 kB' 'SUnreclaim: 769472 kB' 'KernelStack: 26896 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8204124 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234732 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.149 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.149 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.150 14:40:50 -- setup/common.sh@33 -- # echo 0 00:04:08.150 14:40:50 -- setup/common.sh@33 -- # return 0 00:04:08.150 14:40:50 -- setup/hugepages.sh@97 -- # anon=0 00:04:08.150 14:40:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.150 
14:40:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.150 14:40:50 -- setup/common.sh@18 -- # local node= 00:04:08.150 14:40:50 -- setup/common.sh@19 -- # local var val 00:04:08.150 14:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.150 14:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.150 14:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.150 14:40:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.150 14:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.150 14:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109379820 kB' 'MemAvailable: 112908808 kB' 'Buffers: 4124 kB' 'Cached: 10404992 kB' 'SwapCached: 0 kB' 'Active: 7518892 kB' 'Inactive: 3515716 kB' 'Active(anon): 6828496 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 629400 kB' 'Mapped: 171996 kB' 'Shmem: 6203004 kB' 'KReclaimable: 294416 kB' 'Slab: 1063872 kB' 'SReclaimable: 294416 kB' 'SUnreclaim: 769456 kB' 'KernelStack: 26944 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8208952 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234604 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.150 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.150 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.151 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.151 14:40:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.151 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.151 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.151 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.151 14:40:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.151 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.151 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.151 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.151 14:40:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # 
continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.414 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.414 14:40:50 -- setup/common.sh@33 -- # echo 0 00:04:08.414 14:40:50 -- setup/common.sh@33 -- # return 0 00:04:08.414 14:40:50 -- setup/hugepages.sh@99 -- # surp=0 00:04:08.414 14:40:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.414 14:40:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.414 14:40:50 -- setup/common.sh@18 -- # local node= 00:04:08.414 14:40:50 -- setup/common.sh@19 -- # local var val 00:04:08.414 14:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.414 14:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.414 14:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.414 14:40:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.414 14:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.414 14:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.414 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109389184 kB' 'MemAvailable: 112918156 kB' 'Buffers: 4124 kB' 'Cached: 10405004 kB' 'SwapCached: 0 kB' 
'Active: 7517712 kB' 'Inactive: 3515716 kB' 'Active(anon): 6827316 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627744 kB' 'Mapped: 172000 kB' 'Shmem: 6203016 kB' 'KReclaimable: 294384 kB' 'Slab: 1063804 kB' 'SReclaimable: 294384 kB' 'SUnreclaim: 769420 kB' 'KernelStack: 26912 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8203992 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234572 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.415 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.415 14:40:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 
00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.416 14:40:50 -- setup/common.sh@33 -- # echo 0 00:04:08.416 14:40:50 -- setup/common.sh@33 -- # return 0 00:04:08.416 14:40:50 -- setup/hugepages.sh@100 -- # resv=0 00:04:08.416 14:40:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.416 nr_hugepages=1024 00:04:08.416 14:40:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.416 resv_hugepages=0 00:04:08.416 14:40:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.416 surplus_hugepages=0 00:04:08.416 14:40:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.416 anon_hugepages=0 00:04:08.416 14:40:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.416 14:40:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.416 14:40:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.416 14:40:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.416 14:40:50 -- setup/common.sh@18 -- # local node= 00:04:08.416 14:40:50 -- setup/common.sh@19 -- # local var val 00:04:08.416 14:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.416 14:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.416 14:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.416 14:40:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.416 14:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.416 14:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338868 kB' 'MemFree: 109390616 kB' 'MemAvailable: 112919588 kB' 'Buffers: 4124 kB' 'Cached: 10405016 kB' 'SwapCached: 0 kB' 'Active: 7517720 kB' 'Inactive: 3515716 kB' 'Active(anon): 6827324 kB' 'Inactive(anon): 0 kB' 'Active(file): 690396 kB' 'Inactive(file): 3515716 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627840 kB' 'Mapped: 172008 kB' 'Shmem: 6203028 kB' 'KReclaimable: 294384 kB' 'Slab: 1063804 kB' 'SReclaimable: 294384 kB' 'SUnreclaim: 769420 kB' 'KernelStack: 27008 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8205396 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 234652 kB' 'VmallocChunk: 0 kB' 'Percpu: 106560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3618164 kB' 'DirectMap2M: 42199040 kB' 'DirectMap1G: 90177536 kB' 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.416 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.416 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- 
setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.417 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.417 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.417 14:40:50 -- 
setup/common.sh@33 -- # echo 1024 00:04:08.417 14:40:50 -- setup/common.sh@33 -- # return 0 00:04:08.417 14:40:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.417 14:40:50 -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.417 14:40:50 -- setup/hugepages.sh@27 -- # local node 00:04:08.417 14:40:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.417 14:40:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.417 14:40:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.417 14:40:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:08.417 14:40:50 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.417 14:40:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.417 14:40:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.417 14:40:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.417 14:40:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.417 14:40:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.417 14:40:50 -- setup/common.sh@18 -- # local node=0 00:04:08.417 14:40:50 -- setup/common.sh@19 -- # local var val 00:04:08.417 14:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.417 14:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.417 14:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.418 14:40:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.418 14:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.418 14:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58883264 kB' 'MemUsed: 6775744 kB' 'SwapCached: 0 kB' 'Active: 2532136 kB' 'Inactive: 106348 kB' 'Active(anon): 2222616 kB' 'Inactive(anon): 0 kB' 'Active(file): 309520 kB' 'Inactive(file): 106348 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2479556 kB' 'Mapped: 96612 kB' 'AnonPages: 161984 kB' 'Shmem: 2063688 kB' 'KernelStack: 12568 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 159296 kB' 'Slab: 530264 kB' 'SReclaimable: 159296 kB' 'SUnreclaim: 370968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 
14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.418 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.418 14:40:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.419 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.419 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.419 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.419 14:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.419 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.419 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.419 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.419 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.419 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.419 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.419 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.419 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.419 14:40:50 -- setup/common.sh@32 -- # continue 00:04:08.419 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.419 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.419 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.419 14:40:50 -- setup/common.sh@33 -- # echo 0 00:04:08.419 14:40:50 -- setup/common.sh@33 -- # return 0 00:04:08.419 14:40:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.419 14:40:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.419 14:40:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.419 14:40:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.419 14:40:50 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:08.419 node0=1024 expecting 1024 00:04:08.419 14:40:50 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:08.419 00:04:08.419 real 0m7.675s 00:04:08.419 user 0m3.062s 00:04:08.419 sys 0m4.713s 00:04:08.419 14:40:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.419 14:40:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.419 ************************************ 00:04:08.419 END TEST no_shrink_alloc 00:04:08.419 ************************************ 00:04:08.419 14:40:50 -- setup/hugepages.sh@217 -- # clear_hp 00:04:08.419 14:40:50 -- setup/hugepages.sh@37 -- # local node hp 00:04:08.419 14:40:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.419 
14:40:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.419 14:40:50 -- setup/hugepages.sh@41 -- # echo 0 00:04:08.419 14:40:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.419 14:40:50 -- setup/hugepages.sh@41 -- # echo 0 00:04:08.419 14:40:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.419 14:40:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.419 14:40:50 -- setup/hugepages.sh@41 -- # echo 0 00:04:08.419 14:40:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.419 14:40:50 -- setup/hugepages.sh@41 -- # echo 0 00:04:08.419 14:40:50 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:08.419 14:40:50 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:08.419 00:04:08.419 real 0m28.543s 00:04:08.419 user 0m11.509s 00:04:08.419 sys 0m17.280s 00:04:08.419 14:40:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.419 14:40:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.419 ************************************ 00:04:08.419 END TEST hugepages 00:04:08.419 ************************************ 00:04:08.419 14:40:51 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:08.419 14:40:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.419 14:40:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.419 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:04:08.680 ************************************ 00:04:08.680 START TEST driver 00:04:08.680 ************************************ 00:04:08.680 14:40:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:08.680 * Looking for test storage... 
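The driver test that begins here ends up selecting vfio-pci. The decision, visible in the guess_driver trace that follows, keys off whether any IOMMU groups exist (or unsafe no-IOMMU mode is enabled) and whether the module resolves via modprobe. A rough stand-alone sketch of that logic; the uio_pci_generic fallback is an assumption about the usual alternative, not something shown in this trace:

  pick_userspace_driver() {
      local groups unsafe=N
      groups=$(compgen -G '/sys/kernel/iommu_groups/*' | wc -l)   # 322 on this machine
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
          && unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if { (( groups > 0 )) || [[ $unsafe == Y ]]; } \
          && modprobe --show-depends vfio_pci > /dev/null 2>&1; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic > /dev/null 2>&1; then
          echo uio_pci_generic
      else
          echo 'No valid driver found'
      fi
  }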
00:04:08.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:08.680 14:40:51 -- setup/driver.sh@68 -- # setup reset 00:04:08.680 14:40:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.680 14:40:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.964 14:40:56 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.964 14:40:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.964 14:40:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.964 14:40:56 -- common/autotest_common.sh@10 -- # set +x 00:04:13.964 ************************************ 00:04:13.964 START TEST guess_driver 00:04:13.964 ************************************ 00:04:13.964 14:40:56 -- common/autotest_common.sh@1111 -- # guess_driver 00:04:13.964 14:40:56 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.964 14:40:56 -- setup/driver.sh@47 -- # local fail=0 00:04:13.964 14:40:56 -- setup/driver.sh@49 -- # pick_driver 00:04:13.964 14:40:56 -- setup/driver.sh@36 -- # vfio 00:04:13.964 14:40:56 -- setup/driver.sh@21 -- # local iommu_grups 00:04:13.964 14:40:56 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.964 14:40:56 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.964 14:40:56 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:13.964 14:40:56 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:13.964 14:40:56 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:04:13.964 14:40:56 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:13.964 14:40:56 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:13.964 14:40:56 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:13.964 14:40:56 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:13.964 14:40:56 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:13.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:13.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:13.964 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:13.964 14:40:56 -- setup/driver.sh@30 -- # return 0 00:04:13.964 14:40:56 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:13.964 14:40:56 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:13.964 14:40:56 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.964 14:40:56 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:13.964 Looking for driver=vfio-pci 00:04:13.964 14:40:56 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.964 14:40:56 -- setup/driver.sh@45 -- # setup output config 00:04:13.964 14:40:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.964 14:40:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.263 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.263 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.263 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.524 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.524 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.524 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.524 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.524 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.524 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.524 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.524 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.524 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.524 14:40:59 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.524 14:40:59 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.524 14:40:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.524 14:41:00 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:17.524 14:41:00 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.524 14:41:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.785 14:41:00 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:17.785 14:41:00 -- setup/driver.sh@65 -- # setup reset 00:04:17.785 14:41:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.785 14:41:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.082 00:04:23.082 real 0m8.866s 00:04:23.082 user 0m2.963s 00:04:23.082 sys 0m5.117s 00:04:23.082 14:41:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.082 14:41:05 -- common/autotest_common.sh@10 -- # set +x 00:04:23.082 ************************************ 00:04:23.082 END TEST guess_driver 00:04:23.082 ************************************ 00:04:23.082 00:04:23.082 real 0m14.066s 00:04:23.082 user 0m4.530s 00:04:23.082 sys 0m7.953s 00:04:23.082 14:41:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.082 14:41:05 -- common/autotest_common.sh@10 -- # set +x 00:04:23.082 ************************************ 00:04:23.082 END TEST driver 00:04:23.082 ************************************ 00:04:23.082 14:41:05 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:23.082 14:41:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.082 14:41:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.082 14:41:05 -- common/autotest_common.sh@10 -- # set +x 00:04:23.082 ************************************ 00:04:23.082 START TEST devices 00:04:23.082 ************************************ 00:04:23.082 14:41:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:23.082 * Looking for test storage... 
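The devices test that starts here first has to pick a disk it is allowed to scribble on. The selection traced below skips zoned namespaces, requires at least 3 GiB (min_disk_size=3221225472), and rejects disks that already carry a partition table. A condensed, stand-alone sketch of that filter (pick_test_disk is an illustrative name; the real test also records each disk's PCI address):

  pick_test_disk() {
      local min_size=$((3 * 1024 * 1024 * 1024))        # 3221225472 bytes, as in the trace
      local path dev zoned sectors pt
      for path in /sys/block/nvme*n*; do
          [[ -d $path ]] || continue
          dev=${path##*/}
          [[ $dev == *c* ]] && continue                  # skip per-controller multipath nodes
          zoned=$(cat "$path/queue/zoned" 2>/dev/null)
          [[ -n $zoned && $zoned != none ]] && continue  # skip zoned namespaces
          sectors=$(cat "$path/size")
          (( sectors * 512 >= min_size )) || continue    # size is reported in 512-byte sectors
          pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
          [[ -n $pt ]] && continue                       # already partitioned, leave it alone
          echo "$dev"
          return 0
      done
      return 1
  }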
00:04:23.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:23.082 14:41:05 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:23.082 14:41:05 -- setup/devices.sh@192 -- # setup reset 00:04:23.082 14:41:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.082 14:41:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.293 14:41:09 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:27.293 14:41:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:27.293 14:41:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:27.293 14:41:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:27.293 14:41:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:27.293 14:41:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:27.293 14:41:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:27.293 14:41:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.293 14:41:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:27.293 14:41:09 -- setup/devices.sh@196 -- # blocks=() 00:04:27.293 14:41:09 -- setup/devices.sh@196 -- # declare -a blocks 00:04:27.293 14:41:09 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:27.293 14:41:09 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:27.293 14:41:09 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:27.293 14:41:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:27.293 14:41:09 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:27.293 14:41:09 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:27.293 14:41:09 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:27.293 14:41:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:27.293 14:41:09 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:27.293 14:41:09 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:27.293 14:41:09 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:27.293 No valid GPT data, bailing 00:04:27.293 14:41:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.293 14:41:09 -- scripts/common.sh@391 -- # pt= 00:04:27.293 14:41:09 -- scripts/common.sh@392 -- # return 1 00:04:27.293 14:41:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:27.293 14:41:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:27.293 14:41:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:27.293 14:41:09 -- setup/common.sh@80 -- # echo 1920383410176 00:04:27.293 14:41:09 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:27.293 14:41:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:27.293 14:41:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:27.293 14:41:09 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:27.293 14:41:09 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:27.293 14:41:09 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:27.293 14:41:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.293 14:41:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.293 14:41:09 -- common/autotest_common.sh@10 -- # set +x 00:04:27.293 ************************************ 00:04:27.293 START TEST nvme_mount 00:04:27.293 ************************************ 00:04:27.293 14:41:09 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:04:27.293 14:41:09 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:27.293 14:41:09 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:27.293 14:41:09 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.293 14:41:09 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.293 14:41:09 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:27.293 14:41:09 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.293 14:41:09 -- setup/common.sh@40 -- # local part_no=1 00:04:27.293 14:41:09 -- setup/common.sh@41 -- # local size=1073741824 00:04:27.293 14:41:09 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.293 14:41:09 -- setup/common.sh@44 -- # parts=() 00:04:27.293 14:41:09 -- setup/common.sh@44 -- # local parts 00:04:27.293 14:41:09 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.293 14:41:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.293 14:41:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.293 14:41:09 -- setup/common.sh@46 -- # (( part++ )) 00:04:27.293 14:41:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.293 14:41:09 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:27.293 14:41:09 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.293 14:41:09 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:27.864 Creating new GPT entries in memory. 00:04:27.864 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.864 other utilities. 00:04:27.864 14:41:10 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.864 14:41:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.864 14:41:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.864 14:41:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.864 14:41:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.806 Creating new GPT entries in memory. 00:04:28.806 The operation has completed successfully. 
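For reference, the partition, format and mount sequence running here condenses to the commands below (a sketch with shortened paths; the real test mounts under the workspace's test/setup/nvme_mount directory and waits for the partition uevent before formatting):

  disk=/dev/nvme0n1
  mnt=/tmp/nvme_mount                    # stand-in for the workspace mount point
  sgdisk "$disk" --zap-all               # drop any existing partition table
  sgdisk "$disk" --new=1:2048:2099199    # 1 GiB partition: 1073741824 / 512 = 2097152 sectors from LBA 2048
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"              # same quiet/force flags the trace shows
  mount "${disk}p1" "$mnt"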
00:04:28.806 14:41:11 -- setup/common.sh@57 -- # (( part++ )) 00:04:28.806 14:41:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.806 14:41:11 -- setup/common.sh@62 -- # wait 840169 00:04:29.066 14:41:11 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.066 14:41:11 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:29.066 14:41:11 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.066 14:41:11 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:29.066 14:41:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:29.066 14:41:11 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.066 14:41:11 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.066 14:41:11 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.066 14:41:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:29.066 14:41:11 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.066 14:41:11 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.066 14:41:11 -- setup/devices.sh@53 -- # local found=0 00:04:29.066 14:41:11 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.066 14:41:11 -- setup/devices.sh@56 -- # : 00:04:29.066 14:41:11 -- setup/devices.sh@59 -- # local pci status 00:04:29.066 14:41:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.066 14:41:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.066 14:41:11 -- setup/devices.sh@47 -- # setup output config 00:04:29.066 14:41:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.066 14:41:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.364 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.364 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.364 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.364 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.364 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.364 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.364 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.364 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.364 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.364 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.364 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.364 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.364 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:32.365 14:41:14 -- setup/devices.sh@63 -- # found=1 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.365 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.365 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.625 14:41:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.625 14:41:15 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.625 14:41:15 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.625 14:41:15 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.625 14:41:15 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.625 14:41:15 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:32.625 14:41:15 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.625 14:41:15 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.625 14:41:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.625 14:41:15 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:32.625 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.625 14:41:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.625 14:41:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.885 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:32.885 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:32.885 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.885 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:32.886 14:41:15 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:32.886 14:41:15 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:32.886 14:41:15 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.886 14:41:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:32.886 14:41:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:32.886 14:41:15 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.146 14:41:15 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.146 14:41:15 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:33.146 14:41:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:33.146 14:41:15 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.146 14:41:15 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.146 14:41:15 -- setup/devices.sh@53 -- # local found=0 00:04:33.146 14:41:15 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.146 14:41:15 -- setup/devices.sh@56 -- # : 00:04:33.146 14:41:15 -- setup/devices.sh@59 -- # local pci status 00:04:33.146 14:41:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.146 14:41:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:33.146 14:41:15 -- setup/devices.sh@47 -- # setup output config 00:04:33.146 14:41:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.146 14:41:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:36.444 14:41:18 -- setup/devices.sh@63 -- # found=1 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.444 14:41:18 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.444 14:41:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.705 14:41:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.705 14:41:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:36.705 14:41:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.705 14:41:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.705 14:41:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.705 14:41:19 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.705 14:41:19 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:36.705 14:41:19 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:36.705 14:41:19 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:36.705 14:41:19 -- setup/devices.sh@50 -- # local mount_point= 00:04:36.705 14:41:19 -- setup/devices.sh@51 -- # local test_file= 00:04:36.705 14:41:19 -- setup/devices.sh@53 -- # local found=0 00:04:36.705 14:41:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.705 14:41:19 -- setup/devices.sh@59 -- # local pci status 00:04:36.705 14:41:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.705 14:41:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:36.705 14:41:19 -- setup/devices.sh@47 -- # setup output config 00:04:36.705 14:41:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.705 14:41:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.059 14:41:22 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:40.059 14:41:22 -- setup/devices.sh@63 -- # found=1 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.059 14:41:22 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:40.059 14:41:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.320 14:41:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.320 14:41:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:40.320 14:41:22 -- setup/devices.sh@68 -- # return 0 00:04:40.320 14:41:22 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:40.320 14:41:22 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.320 14:41:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:40.320 14:41:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.320 14:41:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.320 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.320 00:04:40.320 real 0m13.331s 00:04:40.320 user 0m3.936s 00:04:40.320 sys 0m7.210s 00:04:40.320 14:41:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.320 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:04:40.320 ************************************ 00:04:40.320 END TEST nvme_mount 00:04:40.320 ************************************ 00:04:40.320 14:41:22 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:40.320 14:41:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.320 14:41:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.320 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:04:40.320 ************************************ 00:04:40.320 START TEST dm_mount 00:04:40.320 ************************************ 00:04:40.320 14:41:22 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:40.320 14:41:22 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:40.320 14:41:22 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:40.320 14:41:22 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:40.320 14:41:22 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:40.320 14:41:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.320 14:41:22 -- setup/common.sh@40 -- # local part_no=2 00:04:40.320 14:41:22 -- setup/common.sh@41 -- # local size=1073741824 00:04:40.320 14:41:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.320 14:41:22 -- setup/common.sh@44 -- # parts=() 00:04:40.320 14:41:22 -- setup/common.sh@44 -- # local parts 00:04:40.320 14:41:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.320 14:41:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.320 14:41:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.320 14:41:22 -- setup/common.sh@46 -- # (( part++ )) 00:04:40.320 14:41:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.320 14:41:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.320 14:41:22 -- setup/common.sh@46 -- # (( part++ )) 00:04:40.320 14:41:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.320 14:41:22 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:40.320 14:41:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.320 14:41:22 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:41.706 Creating new GPT entries in memory. 00:04:41.706 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.706 other utilities. 00:04:41.706 14:41:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.706 14:41:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.706 14:41:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.706 14:41:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.706 14:41:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.652 Creating new GPT entries in memory. 00:04:42.652 The operation has completed successfully. 
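For reference, the partition step traced above can be reproduced standalone. This is a minimal sketch, assuming /dev/nvme0n1 is a scratch disk, using the same sector ranges as the sgdisk calls in the trace; the test itself drives this through setup/common.sh and waits for udev block events rather than re-reading the table directly.

  # hedged sketch: wipe the disk and create the first 1 GiB GPT partition, as in the trace above
  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                           # destroy any existing GPT/MBR data structures
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # partition 1: sectors 2048..2099199 (1 GiB)
  # the second partition is created the same way in the trace that follows:
  # flock "$disk" sgdisk "$disk" --new=2:2099200:4196351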
00:04:42.652 14:41:24 -- setup/common.sh@57 -- # (( part++ )) 00:04:42.652 14:41:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.652 14:41:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.652 14:41:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.652 14:41:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:43.593 The operation has completed successfully. 00:04:43.593 14:41:26 -- setup/common.sh@57 -- # (( part++ )) 00:04:43.593 14:41:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.593 14:41:26 -- setup/common.sh@62 -- # wait 845439 00:04:43.593 14:41:26 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:43.593 14:41:26 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.593 14:41:26 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.593 14:41:26 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:43.593 14:41:26 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:43.593 14:41:26 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.593 14:41:26 -- setup/devices.sh@161 -- # break 00:04:43.593 14:41:26 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.593 14:41:26 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:43.593 14:41:26 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:43.593 14:41:26 -- setup/devices.sh@166 -- # dm=dm-1 00:04:43.593 14:41:26 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:43.593 14:41:26 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:43.593 14:41:26 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.593 14:41:26 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:43.593 14:41:26 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.593 14:41:26 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:43.593 14:41:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:43.593 14:41:26 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.593 14:41:26 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.593 14:41:26 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:43.593 14:41:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:43.593 14:41:26 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.593 14:41:26 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.593 14:41:26 -- setup/devices.sh@53 -- # local found=0 00:04:43.593 14:41:26 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.593 14:41:26 -- setup/devices.sh@56 -- # : 00:04:43.593 14:41:26 -- 
setup/devices.sh@59 -- # local pci status 00:04:43.593 14:41:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.593 14:41:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:43.593 14:41:26 -- setup/devices.sh@47 -- # setup output config 00:04:43.593 14:41:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.593 14:41:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.894 14:41:29 -- setup/devices.sh@63 -- # found=1 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.894 14:41:29 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.894 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.156 14:41:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.156 14:41:29 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:47.156 14:41:29 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.156 14:41:29 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:47.156 14:41:29 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:47.156 14:41:29 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.156 14:41:29 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:47.156 14:41:29 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:47.156 14:41:29 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:47.156 14:41:29 -- setup/devices.sh@50 -- # local mount_point= 00:04:47.156 14:41:29 -- setup/devices.sh@51 -- # local test_file= 00:04:47.156 14:41:29 -- setup/devices.sh@53 -- # local found=0 00:04:47.156 14:41:29 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.156 14:41:29 -- setup/devices.sh@59 -- # local pci status 00:04:47.156 14:41:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.156 14:41:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:47.156 14:41:29 -- setup/devices.sh@47 -- # setup output config 00:04:47.156 14:41:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.156 14:41:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:50.458 14:41:32 -- setup/devices.sh@63 -- # found=1 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.458 14:41:32 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.458 14:41:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.718 14:41:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.718 14:41:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.718 14:41:33 -- setup/devices.sh@68 -- # return 0 00:04:50.718 14:41:33 -- setup/devices.sh@187 -- # cleanup_dm 00:04:50.718 14:41:33 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.718 14:41:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.718 14:41:33 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:50.718 14:41:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.718 14:41:33 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:50.718 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.718 14:41:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.718 14:41:33 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:50.718 00:04:50.718 real 0m10.331s 00:04:50.718 user 0m2.726s 00:04:50.718 sys 0m4.630s 00:04:50.718 14:41:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.719 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:50.719 ************************************ 00:04:50.719 END TEST dm_mount 00:04:50.719 ************************************ 00:04:50.719 14:41:33 -- setup/devices.sh@1 -- # cleanup 00:04:50.719 14:41:33 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:50.719 14:41:33 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.719 14:41:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.719 14:41:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.719 14:41:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.719 14:41:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.979 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 
00:04:50.979 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:50.979 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:50.979 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:50.979 14:41:33 -- setup/devices.sh@12 -- # cleanup_dm 00:04:50.979 14:41:33 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.979 14:41:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.979 14:41:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.979 14:41:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.979 14:41:33 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.979 14:41:33 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:50.979 00:04:50.979 real 0m28.180s 00:04:50.979 user 0m8.215s 00:04:50.979 sys 0m14.584s 00:04:50.979 14:41:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.979 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:50.979 ************************************ 00:04:50.979 END TEST devices 00:04:50.979 ************************************ 00:04:50.979 00:04:50.979 real 1m37.467s 00:04:50.979 user 0m32.795s 00:04:50.979 sys 0m55.433s 00:04:50.979 14:41:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.979 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:50.979 ************************************ 00:04:50.979 END TEST setup.sh 00:04:50.979 ************************************ 00:04:51.240 14:41:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:54.543 Hugepages 00:04:54.543 node hugesize free / total 00:04:54.543 node0 1048576kB 0 / 0 00:04:54.543 node0 2048kB 2048 / 2048 00:04:54.543 node1 1048576kB 0 / 0 00:04:54.543 node1 2048kB 0 / 0 00:04:54.543 00:04:54.543 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.543 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:54.543 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:54.543 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:54.543 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:54.543 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:54.543 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:54.543 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:54.543 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:54.543 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:54.543 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:54.543 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:54.543 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:54.543 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:54.543 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:54.543 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:54.543 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:54.543 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:54.543 14:41:36 -- spdk/autotest.sh@130 -- # uname -s 00:04:54.543 14:41:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:54.543 14:41:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:54.543 14:41:36 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.848 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:80:01.2 (8086 0b00): 
ioatdma -> vfio-pci 00:04:57.848 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.848 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.108 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:00.018 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:00.018 14:41:42 -- common/autotest_common.sh@1518 -- # sleep 1 00:05:00.958 14:41:43 -- common/autotest_common.sh@1519 -- # bdfs=() 00:05:00.958 14:41:43 -- common/autotest_common.sh@1519 -- # local bdfs 00:05:00.958 14:41:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:00.958 14:41:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:00.958 14:41:43 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:00.958 14:41:43 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:00.958 14:41:43 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.958 14:41:43 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.958 14:41:43 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:01.219 14:41:43 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:01.219 14:41:43 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:05:01.219 14:41:43 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.523 Waiting for block devices as requested 00:05:04.523 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:04.523 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:04.523 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:04.784 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:04.784 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:04.784 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:05.044 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:05.044 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:05.044 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:05.304 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:05.304 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:05.304 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:05.565 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:05.565 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:05.565 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:05.565 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:05.826 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:06.085 14:41:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:06.085 14:41:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:06.085 14:41:48 -- common/autotest_common.sh@1488 -- # grep 0000:65:00.0/nvme/nvme 00:05:06.085 14:41:48 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:05:06.085 14:41:48 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.085 14:41:48 -- common/autotest_common.sh@1489 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:06.085 14:41:48 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.085 14:41:48 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:06.085 14:41:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:06.085 14:41:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:06.085 14:41:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:06.085 14:41:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:06.085 14:41:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:06.085 14:41:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:05:06.085 14:41:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:06.085 14:41:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:06.085 14:41:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:06.085 14:41:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:06.085 14:41:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:06.085 14:41:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:06.085 14:41:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:06.085 14:41:48 -- common/autotest_common.sh@1543 -- # continue 00:05:06.085 14:41:48 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:06.085 14:41:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:06.085 14:41:48 -- common/autotest_common.sh@10 -- # set +x 00:05:06.085 14:41:48 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:06.085 14:41:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:06.085 14:41:48 -- common/autotest_common.sh@10 -- # set +x 00:05:06.085 14:41:48 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.386 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:09.386 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:09.386 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:09.645 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:09.645 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:09.645 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:09.646 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:10.214 14:41:52 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:10.214 14:41:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.214 14:41:52 -- common/autotest_common.sh@10 -- # set +x 00:05:10.214 14:41:52 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:10.214 14:41:52 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:05:10.214 14:41:52 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.214 14:41:52 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:10.214 14:41:52 -- common/autotest_common.sh@1563 -- # local bdfs 00:05:10.214 14:41:52 -- common/autotest_common.sh@1565 -- # 
get_nvme_bdfs 00:05:10.214 14:41:52 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:10.214 14:41:52 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:10.214 14:41:52 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.214 14:41:52 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:10.214 14:41:52 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:10.214 14:41:52 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:10.214 14:41:52 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:05:10.214 14:41:52 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:10.214 14:41:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:10.214 14:41:52 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:05:10.214 14:41:52 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:10.214 14:41:52 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:05:10.214 14:41:52 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:05:10.214 14:41:52 -- common/autotest_common.sh@1579 -- # return 0 00:05:10.214 14:41:52 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:10.214 14:41:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:10.214 14:41:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:10.214 14:41:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:10.214 14:41:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:10.214 14:41:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:10.214 14:41:52 -- common/autotest_common.sh@10 -- # set +x 00:05:10.214 14:41:52 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:10.214 14:41:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.214 14:41:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.214 14:41:52 -- common/autotest_common.sh@10 -- # set +x 00:05:10.474 ************************************ 00:05:10.474 START TEST env 00:05:10.474 ************************************ 00:05:10.474 14:41:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:10.474 * Looking for test storage... 
00:05:10.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:10.474 14:41:52 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.474 14:41:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.474 14:41:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.474 14:41:52 -- common/autotest_common.sh@10 -- # set +x 00:05:10.474 ************************************ 00:05:10.474 START TEST env_memory 00:05:10.474 ************************************ 00:05:10.474 14:41:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.474 00:05:10.474 00:05:10.474 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.474 http://cunit.sourceforge.net/ 00:05:10.474 00:05:10.474 00:05:10.474 Suite: memory 00:05:10.779 Test: alloc and free memory map ...[2024-04-26 14:41:53.179054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:10.779 passed 00:05:10.779 Test: mem map translation ...[2024-04-26 14:41:53.204724] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:10.779 [2024-04-26 14:41:53.204753] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:10.779 [2024-04-26 14:41:53.204800] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:10.779 [2024-04-26 14:41:53.204808] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:10.779 passed 00:05:10.779 Test: mem map registration ...[2024-04-26 14:41:53.260209] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:10.779 [2024-04-26 14:41:53.260231] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:10.779 passed 00:05:10.779 Test: mem map adjacent registrations ...passed 00:05:10.779 00:05:10.779 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.779 suites 1 1 n/a 0 0 00:05:10.779 tests 4 4 4 0 0 00:05:10.779 asserts 152 152 152 0 n/a 00:05:10.779 00:05:10.779 Elapsed time = 0.193 seconds 00:05:10.779 00:05:10.779 real 0m0.207s 00:05:10.779 user 0m0.192s 00:05:10.779 sys 0m0.013s 00:05:10.779 14:41:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.779 14:41:53 -- common/autotest_common.sh@10 -- # set +x 00:05:10.779 ************************************ 00:05:10.779 END TEST env_memory 00:05:10.779 ************************************ 00:05:10.779 14:41:53 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.779 14:41:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.779 14:41:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.779 14:41:53 -- common/autotest_common.sh@10 -- # set +x 
00:05:11.039 ************************************ 00:05:11.039 START TEST env_vtophys 00:05:11.039 ************************************ 00:05:11.039 14:41:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:11.039 EAL: lib.eal log level changed from notice to debug 00:05:11.039 EAL: Detected lcore 0 as core 0 on socket 0 00:05:11.039 EAL: Detected lcore 1 as core 1 on socket 0 00:05:11.039 EAL: Detected lcore 2 as core 2 on socket 0 00:05:11.039 EAL: Detected lcore 3 as core 3 on socket 0 00:05:11.039 EAL: Detected lcore 4 as core 4 on socket 0 00:05:11.039 EAL: Detected lcore 5 as core 5 on socket 0 00:05:11.039 EAL: Detected lcore 6 as core 6 on socket 0 00:05:11.039 EAL: Detected lcore 7 as core 7 on socket 0 00:05:11.039 EAL: Detected lcore 8 as core 8 on socket 0 00:05:11.039 EAL: Detected lcore 9 as core 9 on socket 0 00:05:11.039 EAL: Detected lcore 10 as core 10 on socket 0 00:05:11.039 EAL: Detected lcore 11 as core 11 on socket 0 00:05:11.039 EAL: Detected lcore 12 as core 12 on socket 0 00:05:11.039 EAL: Detected lcore 13 as core 13 on socket 0 00:05:11.040 EAL: Detected lcore 14 as core 14 on socket 0 00:05:11.040 EAL: Detected lcore 15 as core 15 on socket 0 00:05:11.040 EAL: Detected lcore 16 as core 16 on socket 0 00:05:11.040 EAL: Detected lcore 17 as core 17 on socket 0 00:05:11.040 EAL: Detected lcore 18 as core 18 on socket 0 00:05:11.040 EAL: Detected lcore 19 as core 19 on socket 0 00:05:11.040 EAL: Detected lcore 20 as core 20 on socket 0 00:05:11.040 EAL: Detected lcore 21 as core 21 on socket 0 00:05:11.040 EAL: Detected lcore 22 as core 22 on socket 0 00:05:11.040 EAL: Detected lcore 23 as core 23 on socket 0 00:05:11.040 EAL: Detected lcore 24 as core 24 on socket 0 00:05:11.040 EAL: Detected lcore 25 as core 25 on socket 0 00:05:11.040 EAL: Detected lcore 26 as core 26 on socket 0 00:05:11.040 EAL: Detected lcore 27 as core 27 on socket 0 00:05:11.040 EAL: Detected lcore 28 as core 28 on socket 0 00:05:11.040 EAL: Detected lcore 29 as core 29 on socket 0 00:05:11.040 EAL: Detected lcore 30 as core 30 on socket 0 00:05:11.040 EAL: Detected lcore 31 as core 31 on socket 0 00:05:11.040 EAL: Detected lcore 32 as core 32 on socket 0 00:05:11.040 EAL: Detected lcore 33 as core 33 on socket 0 00:05:11.040 EAL: Detected lcore 34 as core 34 on socket 0 00:05:11.040 EAL: Detected lcore 35 as core 35 on socket 0 00:05:11.040 EAL: Detected lcore 36 as core 0 on socket 1 00:05:11.040 EAL: Detected lcore 37 as core 1 on socket 1 00:05:11.040 EAL: Detected lcore 38 as core 2 on socket 1 00:05:11.040 EAL: Detected lcore 39 as core 3 on socket 1 00:05:11.040 EAL: Detected lcore 40 as core 4 on socket 1 00:05:11.040 EAL: Detected lcore 41 as core 5 on socket 1 00:05:11.040 EAL: Detected lcore 42 as core 6 on socket 1 00:05:11.040 EAL: Detected lcore 43 as core 7 on socket 1 00:05:11.040 EAL: Detected lcore 44 as core 8 on socket 1 00:05:11.040 EAL: Detected lcore 45 as core 9 on socket 1 00:05:11.040 EAL: Detected lcore 46 as core 10 on socket 1 00:05:11.040 EAL: Detected lcore 47 as core 11 on socket 1 00:05:11.040 EAL: Detected lcore 48 as core 12 on socket 1 00:05:11.040 EAL: Detected lcore 49 as core 13 on socket 1 00:05:11.040 EAL: Detected lcore 50 as core 14 on socket 1 00:05:11.040 EAL: Detected lcore 51 as core 15 on socket 1 00:05:11.040 EAL: Detected lcore 52 as core 16 on socket 1 00:05:11.040 EAL: Detected lcore 53 as core 17 on socket 1 00:05:11.040 EAL: Detected lcore 54 as core 18 on socket 1 
00:05:11.040 EAL: Detected lcore 55 as core 19 on socket 1 00:05:11.040 EAL: Detected lcore 56 as core 20 on socket 1 00:05:11.040 EAL: Detected lcore 57 as core 21 on socket 1 00:05:11.040 EAL: Detected lcore 58 as core 22 on socket 1 00:05:11.040 EAL: Detected lcore 59 as core 23 on socket 1 00:05:11.040 EAL: Detected lcore 60 as core 24 on socket 1 00:05:11.040 EAL: Detected lcore 61 as core 25 on socket 1 00:05:11.040 EAL: Detected lcore 62 as core 26 on socket 1 00:05:11.040 EAL: Detected lcore 63 as core 27 on socket 1 00:05:11.040 EAL: Detected lcore 64 as core 28 on socket 1 00:05:11.040 EAL: Detected lcore 65 as core 29 on socket 1 00:05:11.040 EAL: Detected lcore 66 as core 30 on socket 1 00:05:11.040 EAL: Detected lcore 67 as core 31 on socket 1 00:05:11.040 EAL: Detected lcore 68 as core 32 on socket 1 00:05:11.040 EAL: Detected lcore 69 as core 33 on socket 1 00:05:11.040 EAL: Detected lcore 70 as core 34 on socket 1 00:05:11.040 EAL: Detected lcore 71 as core 35 on socket 1 00:05:11.040 EAL: Detected lcore 72 as core 0 on socket 0 00:05:11.040 EAL: Detected lcore 73 as core 1 on socket 0 00:05:11.040 EAL: Detected lcore 74 as core 2 on socket 0 00:05:11.040 EAL: Detected lcore 75 as core 3 on socket 0 00:05:11.040 EAL: Detected lcore 76 as core 4 on socket 0 00:05:11.040 EAL: Detected lcore 77 as core 5 on socket 0 00:05:11.040 EAL: Detected lcore 78 as core 6 on socket 0 00:05:11.040 EAL: Detected lcore 79 as core 7 on socket 0 00:05:11.040 EAL: Detected lcore 80 as core 8 on socket 0 00:05:11.040 EAL: Detected lcore 81 as core 9 on socket 0 00:05:11.040 EAL: Detected lcore 82 as core 10 on socket 0 00:05:11.040 EAL: Detected lcore 83 as core 11 on socket 0 00:05:11.040 EAL: Detected lcore 84 as core 12 on socket 0 00:05:11.040 EAL: Detected lcore 85 as core 13 on socket 0 00:05:11.040 EAL: Detected lcore 86 as core 14 on socket 0 00:05:11.040 EAL: Detected lcore 87 as core 15 on socket 0 00:05:11.040 EAL: Detected lcore 88 as core 16 on socket 0 00:05:11.040 EAL: Detected lcore 89 as core 17 on socket 0 00:05:11.040 EAL: Detected lcore 90 as core 18 on socket 0 00:05:11.040 EAL: Detected lcore 91 as core 19 on socket 0 00:05:11.040 EAL: Detected lcore 92 as core 20 on socket 0 00:05:11.040 EAL: Detected lcore 93 as core 21 on socket 0 00:05:11.040 EAL: Detected lcore 94 as core 22 on socket 0 00:05:11.040 EAL: Detected lcore 95 as core 23 on socket 0 00:05:11.040 EAL: Detected lcore 96 as core 24 on socket 0 00:05:11.040 EAL: Detected lcore 97 as core 25 on socket 0 00:05:11.040 EAL: Detected lcore 98 as core 26 on socket 0 00:05:11.040 EAL: Detected lcore 99 as core 27 on socket 0 00:05:11.040 EAL: Detected lcore 100 as core 28 on socket 0 00:05:11.040 EAL: Detected lcore 101 as core 29 on socket 0 00:05:11.040 EAL: Detected lcore 102 as core 30 on socket 0 00:05:11.040 EAL: Detected lcore 103 as core 31 on socket 0 00:05:11.040 EAL: Detected lcore 104 as core 32 on socket 0 00:05:11.040 EAL: Detected lcore 105 as core 33 on socket 0 00:05:11.040 EAL: Detected lcore 106 as core 34 on socket 0 00:05:11.040 EAL: Detected lcore 107 as core 35 on socket 0 00:05:11.040 EAL: Detected lcore 108 as core 0 on socket 1 00:05:11.040 EAL: Detected lcore 109 as core 1 on socket 1 00:05:11.040 EAL: Detected lcore 110 as core 2 on socket 1 00:05:11.040 EAL: Detected lcore 111 as core 3 on socket 1 00:05:11.040 EAL: Detected lcore 112 as core 4 on socket 1 00:05:11.040 EAL: Detected lcore 113 as core 5 on socket 1 00:05:11.040 EAL: Detected lcore 114 as core 6 on socket 1 00:05:11.040 
EAL: Detected lcore 115 as core 7 on socket 1 00:05:11.040 EAL: Detected lcore 116 as core 8 on socket 1 00:05:11.040 EAL: Detected lcore 117 as core 9 on socket 1 00:05:11.040 EAL: Detected lcore 118 as core 10 on socket 1 00:05:11.040 EAL: Detected lcore 119 as core 11 on socket 1 00:05:11.040 EAL: Detected lcore 120 as core 12 on socket 1 00:05:11.040 EAL: Detected lcore 121 as core 13 on socket 1 00:05:11.040 EAL: Detected lcore 122 as core 14 on socket 1 00:05:11.040 EAL: Detected lcore 123 as core 15 on socket 1 00:05:11.040 EAL: Detected lcore 124 as core 16 on socket 1 00:05:11.040 EAL: Detected lcore 125 as core 17 on socket 1 00:05:11.040 EAL: Detected lcore 126 as core 18 on socket 1 00:05:11.040 EAL: Detected lcore 127 as core 19 on socket 1 00:05:11.040 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:11.040 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:11.040 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:11.040 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:11.040 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:11.040 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:11.040 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:11.040 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:11.040 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:11.040 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:11.040 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:11.040 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:11.040 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:11.040 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:11.040 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:11.040 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:11.040 EAL: Maximum logical cores by configuration: 128 00:05:11.040 EAL: Detected CPU lcores: 128 00:05:11.040 EAL: Detected NUMA nodes: 2 00:05:11.040 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:11.040 EAL: Detected shared linkage of DPDK 00:05:11.040 EAL: No shared files mode enabled, IPC will be disabled 00:05:11.040 EAL: Bus pci wants IOVA as 'DC' 00:05:11.040 EAL: Buses did not request a specific IOVA mode. 00:05:11.040 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:11.040 EAL: Selected IOVA mode 'VA' 00:05:11.040 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.040 EAL: Probing VFIO support... 00:05:11.040 EAL: IOMMU type 1 (Type 1) is supported 00:05:11.040 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:11.040 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:11.040 EAL: VFIO support initialized 00:05:11.040 EAL: Ask a virtual area of 0x2e000 bytes 00:05:11.040 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:11.040 EAL: Setting up physically contiguous memory... 
00:05:11.040 EAL: Setting maximum number of open files to 524288 00:05:11.040 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:11.040 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:11.040 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:11.040 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.040 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:11.040 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.040 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.040 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:11.040 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:11.040 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.040 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:11.040 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.040 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.040 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:11.040 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:11.040 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.040 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:11.040 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.040 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.040 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:11.040 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:11.040 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.040 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:11.040 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.040 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.040 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:11.040 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:11.040 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:11.040 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.040 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:11.040 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.040 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.040 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:11.040 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:11.040 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.040 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:11.040 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.041 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.041 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:11.041 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:11.041 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.041 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:11.041 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.041 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.041 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:11.041 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:11.041 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.041 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:11.041 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.041 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.041 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:11.041 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:11.041 EAL: Hugepages will be freed exactly as allocated. 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: TSC frequency is ~2400000 KHz 00:05:11.041 EAL: Main lcore 0 is ready (tid=7fe9e485ca00;cpuset=[0]) 00:05:11.041 EAL: Trying to obtain current memory policy. 00:05:11.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.041 EAL: Restoring previous memory policy: 0 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was expanded by 2MB 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:11.041 EAL: Mem event callback 'spdk:(nil)' registered 00:05:11.041 00:05:11.041 00:05:11.041 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.041 http://cunit.sourceforge.net/ 00:05:11.041 00:05:11.041 00:05:11.041 Suite: components_suite 00:05:11.041 Test: vtophys_malloc_test ...passed 00:05:11.041 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:11.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.041 EAL: Restoring previous memory policy: 4 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was expanded by 4MB 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was shrunk by 4MB 00:05:11.041 EAL: Trying to obtain current memory policy. 00:05:11.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.041 EAL: Restoring previous memory policy: 4 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was expanded by 6MB 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was shrunk by 6MB 00:05:11.041 EAL: Trying to obtain current memory policy. 00:05:11.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.041 EAL: Restoring previous memory policy: 4 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was expanded by 10MB 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was shrunk by 10MB 00:05:11.041 EAL: Trying to obtain current memory policy. 
00:05:11.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.041 EAL: Restoring previous memory policy: 4 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was expanded by 18MB 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was shrunk by 18MB 00:05:11.041 EAL: Trying to obtain current memory policy. 00:05:11.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.041 EAL: Restoring previous memory policy: 4 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was expanded by 34MB 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was shrunk by 34MB 00:05:11.041 EAL: Trying to obtain current memory policy. 00:05:11.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.041 EAL: Restoring previous memory policy: 4 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was expanded by 66MB 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was shrunk by 66MB 00:05:11.041 EAL: Trying to obtain current memory policy. 00:05:11.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.041 EAL: Restoring previous memory policy: 4 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was expanded by 130MB 00:05:11.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.041 EAL: request: mp_malloc_sync 00:05:11.041 EAL: No shared files mode enabled, IPC is disabled 00:05:11.041 EAL: Heap on socket 0 was shrunk by 130MB 00:05:11.041 EAL: Trying to obtain current memory policy. 00:05:11.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.300 EAL: Restoring previous memory policy: 4 00:05:11.300 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.300 EAL: request: mp_malloc_sync 00:05:11.300 EAL: No shared files mode enabled, IPC is disabled 00:05:11.300 EAL: Heap on socket 0 was expanded by 258MB 00:05:11.300 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.300 EAL: request: mp_malloc_sync 00:05:11.300 EAL: No shared files mode enabled, IPC is disabled 00:05:11.300 EAL: Heap on socket 0 was shrunk by 258MB 00:05:11.300 EAL: Trying to obtain current memory policy. 
00:05:11.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.300 EAL: Restoring previous memory policy: 4 00:05:11.300 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.300 EAL: request: mp_malloc_sync 00:05:11.300 EAL: No shared files mode enabled, IPC is disabled 00:05:11.300 EAL: Heap on socket 0 was expanded by 514MB 00:05:11.300 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.300 EAL: request: mp_malloc_sync 00:05:11.300 EAL: No shared files mode enabled, IPC is disabled 00:05:11.300 EAL: Heap on socket 0 was shrunk by 514MB 00:05:11.300 EAL: Trying to obtain current memory policy. 00:05:11.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.559 EAL: Restoring previous memory policy: 4 00:05:11.559 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.559 EAL: request: mp_malloc_sync 00:05:11.559 EAL: No shared files mode enabled, IPC is disabled 00:05:11.559 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.559 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.819 EAL: request: mp_malloc_sync 00:05:11.819 EAL: No shared files mode enabled, IPC is disabled 00:05:11.819 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.819 passed 00:05:11.819 00:05:11.819 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.819 suites 1 1 n/a 0 0 00:05:11.819 tests 2 2 2 0 0 00:05:11.819 asserts 497 497 497 0 n/a 00:05:11.819 00:05:11.819 Elapsed time = 0.659 seconds 00:05:11.819 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.819 EAL: request: mp_malloc_sync 00:05:11.819 EAL: No shared files mode enabled, IPC is disabled 00:05:11.819 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.819 EAL: No shared files mode enabled, IPC is disabled 00:05:11.819 EAL: No shared files mode enabled, IPC is disabled 00:05:11.819 EAL: No shared files mode enabled, IPC is disabled 00:05:11.819 00:05:11.819 real 0m0.784s 00:05:11.819 user 0m0.423s 00:05:11.819 sys 0m0.336s 00:05:11.819 14:41:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.819 14:41:54 -- common/autotest_common.sh@10 -- # set +x 00:05:11.819 ************************************ 00:05:11.819 END TEST env_vtophys 00:05:11.819 ************************************ 00:05:11.819 14:41:54 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.819 14:41:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.819 14:41:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.819 14:41:54 -- common/autotest_common.sh@10 -- # set +x 00:05:11.819 ************************************ 00:05:11.819 START TEST env_pci 00:05:11.819 ************************************ 00:05:11.819 14:41:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:12.079 00:05:12.079 00:05:12.079 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.079 http://cunit.sourceforge.net/ 00:05:12.079 00:05:12.079 00:05:12.079 Suite: pci 00:05:12.079 Test: pci_hook ...[2024-04-26 14:41:54.493200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 856682 has claimed it 00:05:12.079 EAL: Cannot find device (10000:00:01.0) 00:05:12.079 EAL: Failed to attach device on primary process 00:05:12.079 passed 00:05:12.079 00:05:12.079 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.079 suites 1 1 n/a 0 0 00:05:12.079 tests 1 1 1 0 0 
00:05:12.079 asserts 25 25 25 0 n/a 00:05:12.079 00:05:12.079 Elapsed time = 0.031 seconds 00:05:12.079 00:05:12.079 real 0m0.052s 00:05:12.079 user 0m0.015s 00:05:12.079 sys 0m0.037s 00:05:12.079 14:41:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.079 14:41:54 -- common/autotest_common.sh@10 -- # set +x 00:05:12.079 ************************************ 00:05:12.079 END TEST env_pci 00:05:12.079 ************************************ 00:05:12.079 14:41:54 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.079 14:41:54 -- env/env.sh@15 -- # uname 00:05:12.079 14:41:54 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.079 14:41:54 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.079 14:41:54 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.079 14:41:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:12.079 14:41:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.079 14:41:54 -- common/autotest_common.sh@10 -- # set +x 00:05:12.079 ************************************ 00:05:12.079 START TEST env_dpdk_post_init 00:05:12.079 ************************************ 00:05:12.079 14:41:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.339 EAL: Detected CPU lcores: 128 00:05:12.339 EAL: Detected NUMA nodes: 2 00:05:12.339 EAL: Detected shared linkage of DPDK 00:05:12.339 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.339 EAL: Selected IOVA mode 'VA' 00:05:12.339 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.339 EAL: VFIO support initialized 00:05:12.339 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.339 EAL: Using IOMMU type 1 (Type 1) 00:05:12.339 EAL: Ignore mapping IO port bar(1) 00:05:12.600 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:12.600 EAL: Ignore mapping IO port bar(1) 00:05:12.861 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:12.861 EAL: Ignore mapping IO port bar(1) 00:05:12.861 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:13.121 EAL: Ignore mapping IO port bar(1) 00:05:13.121 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:13.381 EAL: Ignore mapping IO port bar(1) 00:05:13.382 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:13.642 EAL: Ignore mapping IO port bar(1) 00:05:13.642 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:13.642 EAL: Ignore mapping IO port bar(1) 00:05:13.902 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:13.902 EAL: Ignore mapping IO port bar(1) 00:05:14.162 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:14.422 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:14.422 EAL: Ignore mapping IO port bar(1) 00:05:14.422 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:14.682 EAL: Ignore mapping IO port bar(1) 00:05:14.682 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:14.943 EAL: Ignore mapping IO port bar(1) 00:05:14.943 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
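The spdk_ioat and spdk_nvme probes in this trace (continuing below) only succeed because hugepages were reserved and the devices were handed to a userspace-capable driver before the job started; the "VFIO support initialized" and "Using IOMMU type 1" lines above reflect that. A hedged sketch of how that preparation is typically done from an SPDK checkout — setup.sh is SPDK's stock helper, and the HUGEMEM value here is an arbitrary example, not the one this job used:

    # run from the SPDK repository root; requires root
    sudo HUGEMEM=4096 scripts/setup.sh     # reserve hugepages and bind NVMe/ioat devices for userspace use
    sudo scripts/setup.sh status           # verify which devices are now bound and how much hugepage memory exists
    # sudo scripts/setup.sh reset          # hand the devices back to their kernel drivers afterwards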
00:05:15.204 EAL: Ignore mapping IO port bar(1) 00:05:15.204 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:15.204 EAL: Ignore mapping IO port bar(1) 00:05:15.465 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:15.465 EAL: Ignore mapping IO port bar(1) 00:05:15.761 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:15.761 EAL: Ignore mapping IO port bar(1) 00:05:16.055 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:16.055 EAL: Ignore mapping IO port bar(1) 00:05:16.055 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:16.055 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:16.055 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:16.321 Starting DPDK initialization... 00:05:16.321 Starting SPDK post initialization... 00:05:16.321 SPDK NVMe probe 00:05:16.321 Attaching to 0000:65:00.0 00:05:16.321 Attached to 0000:65:00.0 00:05:16.321 Cleaning up... 00:05:18.230 00:05:18.230 real 0m5.714s 00:05:18.230 user 0m0.187s 00:05:18.230 sys 0m0.075s 00:05:18.230 14:42:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.230 14:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:18.230 ************************************ 00:05:18.230 END TEST env_dpdk_post_init 00:05:18.230 ************************************ 00:05:18.230 14:42:00 -- env/env.sh@26 -- # uname 00:05:18.230 14:42:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.230 14:42:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.230 14:42:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.230 14:42:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.230 14:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:18.230 ************************************ 00:05:18.230 START TEST env_mem_callbacks 00:05:18.230 ************************************ 00:05:18.230 14:42:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.230 EAL: Detected CPU lcores: 128 00:05:18.230 EAL: Detected NUMA nodes: 2 00:05:18.230 EAL: Detected shared linkage of DPDK 00:05:18.230 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.230 EAL: Selected IOVA mode 'VA' 00:05:18.230 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.230 EAL: VFIO support initialized 00:05:18.230 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.230 00:05:18.230 00:05:18.230 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.230 http://cunit.sourceforge.net/ 00:05:18.230 00:05:18.230 00:05:18.230 Suite: memory 00:05:18.230 Test: test ... 
00:05:18.230 register 0x200000200000 2097152 00:05:18.230 malloc 3145728 00:05:18.230 register 0x200000400000 4194304 00:05:18.230 buf 0x200000500000 len 3145728 PASSED 00:05:18.230 malloc 64 00:05:18.230 buf 0x2000004fff40 len 64 PASSED 00:05:18.230 malloc 4194304 00:05:18.230 register 0x200000800000 6291456 00:05:18.230 buf 0x200000a00000 len 4194304 PASSED 00:05:18.230 free 0x200000500000 3145728 00:05:18.231 free 0x2000004fff40 64 00:05:18.231 unregister 0x200000400000 4194304 PASSED 00:05:18.231 free 0x200000a00000 4194304 00:05:18.231 unregister 0x200000800000 6291456 PASSED 00:05:18.231 malloc 8388608 00:05:18.231 register 0x200000400000 10485760 00:05:18.231 buf 0x200000600000 len 8388608 PASSED 00:05:18.231 free 0x200000600000 8388608 00:05:18.231 unregister 0x200000400000 10485760 PASSED 00:05:18.231 passed 00:05:18.231 00:05:18.231 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.231 suites 1 1 n/a 0 0 00:05:18.231 tests 1 1 1 0 0 00:05:18.231 asserts 15 15 15 0 n/a 00:05:18.231 00:05:18.231 Elapsed time = 0.008 seconds 00:05:18.231 00:05:18.231 real 0m0.065s 00:05:18.231 user 0m0.019s 00:05:18.231 sys 0m0.046s 00:05:18.231 14:42:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.231 14:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:18.231 ************************************ 00:05:18.231 END TEST env_mem_callbacks 00:05:18.231 ************************************ 00:05:18.231 00:05:18.231 real 0m7.830s 00:05:18.231 user 0m1.198s 00:05:18.231 sys 0m1.072s 00:05:18.231 14:42:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.231 14:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:18.231 ************************************ 00:05:18.231 END TEST env 00:05:18.231 ************************************ 00:05:18.231 14:42:00 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.231 14:42:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.231 14:42:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.231 14:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:18.491 ************************************ 00:05:18.491 START TEST rpc 00:05:18.491 ************************************ 00:05:18.491 14:42:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.491 * Looking for test storage... 00:05:18.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.491 14:42:01 -- rpc/rpc.sh@65 -- # spdk_pid=858199 00:05:18.491 14:42:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.491 14:42:01 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:18.491 14:42:01 -- rpc/rpc.sh@67 -- # waitforlisten 858199 00:05:18.491 14:42:01 -- common/autotest_common.sh@817 -- # '[' -z 858199 ']' 00:05:18.491 14:42:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.491 14:42:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:18.491 14:42:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
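While the harness waits for spdk_tgt to come up on /var/tmp/spdk.sock, note that the rpc_integrity trace that follows is just a scripted version of a manual RPC session. A hedged sketch of the same create/claim/inspect/teardown flow using scripts/rpc.py from an SPDK checkout, assuming the target is already running on its default RPC socket:

    # create an 8 MiB malloc bdev with 512-byte blocks, as the test below does (auto-named Malloc0)
    scripts/rpc.py bdev_malloc_create 8 512
    # layer a passthru bdev on top; the base bdev becomes claimed with claim_type exclusive_write
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length      # expect 2: Malloc0 + Passthru0
    # tear down in reverse order and confirm the bdev list is empty again
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length      # expect 0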
00:05:18.491 14:42:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:18.491 14:42:01 -- common/autotest_common.sh@10 -- # set +x 00:05:18.491 [2024-04-26 14:42:01.080366] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:18.491 [2024-04-26 14:42:01.080421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858199 ] 00:05:18.491 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.491 [2024-04-26 14:42:01.143249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.751 [2024-04-26 14:42:01.206347] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:18.751 [2024-04-26 14:42:01.206389] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 858199' to capture a snapshot of events at runtime. 00:05:18.751 [2024-04-26 14:42:01.206396] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:18.751 [2024-04-26 14:42:01.206403] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:18.751 [2024-04-26 14:42:01.206408] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid858199 for offline analysis/debug. 00:05:18.751 [2024-04-26 14:42:01.206428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.319 14:42:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:19.319 14:42:01 -- common/autotest_common.sh@850 -- # return 0 00:05:19.319 14:42:01 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.319 14:42:01 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.319 14:42:01 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:19.319 14:42:01 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:19.319 14:42:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.319 14:42:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.319 14:42:01 -- common/autotest_common.sh@10 -- # set +x 00:05:19.579 ************************************ 00:05:19.579 START TEST rpc_integrity 00:05:19.579 ************************************ 00:05:19.579 14:42:02 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:19.579 14:42:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.579 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.579 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.579 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.579 14:42:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.579 14:42:02 -- rpc/rpc.sh@13 -- # jq length 00:05:19.579 14:42:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.579 14:42:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.579 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:05:19.579 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.579 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.579 14:42:02 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:19.579 14:42:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.579 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.579 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.579 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.579 14:42:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.579 { 00:05:19.579 "name": "Malloc0", 00:05:19.579 "aliases": [ 00:05:19.579 "c4643ba3-6102-4159-8221-5d14ae20d8b5" 00:05:19.579 ], 00:05:19.579 "product_name": "Malloc disk", 00:05:19.579 "block_size": 512, 00:05:19.579 "num_blocks": 16384, 00:05:19.579 "uuid": "c4643ba3-6102-4159-8221-5d14ae20d8b5", 00:05:19.579 "assigned_rate_limits": { 00:05:19.579 "rw_ios_per_sec": 0, 00:05:19.579 "rw_mbytes_per_sec": 0, 00:05:19.579 "r_mbytes_per_sec": 0, 00:05:19.579 "w_mbytes_per_sec": 0 00:05:19.579 }, 00:05:19.579 "claimed": false, 00:05:19.579 "zoned": false, 00:05:19.579 "supported_io_types": { 00:05:19.579 "read": true, 00:05:19.579 "write": true, 00:05:19.579 "unmap": true, 00:05:19.579 "write_zeroes": true, 00:05:19.579 "flush": true, 00:05:19.579 "reset": true, 00:05:19.579 "compare": false, 00:05:19.579 "compare_and_write": false, 00:05:19.579 "abort": true, 00:05:19.579 "nvme_admin": false, 00:05:19.579 "nvme_io": false 00:05:19.579 }, 00:05:19.579 "memory_domains": [ 00:05:19.579 { 00:05:19.579 "dma_device_id": "system", 00:05:19.579 "dma_device_type": 1 00:05:19.579 }, 00:05:19.579 { 00:05:19.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.579 "dma_device_type": 2 00:05:19.579 } 00:05:19.579 ], 00:05:19.579 "driver_specific": {} 00:05:19.579 } 00:05:19.579 ]' 00:05:19.579 14:42:02 -- rpc/rpc.sh@17 -- # jq length 00:05:19.579 14:42:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.579 14:42:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:19.579 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.579 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.579 [2024-04-26 14:42:02.146879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:19.579 [2024-04-26 14:42:02.146913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.579 [2024-04-26 14:42:02.146926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x909b30 00:05:19.579 [2024-04-26 14:42:02.146932] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.579 [2024-04-26 14:42:02.148290] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.579 [2024-04-26 14:42:02.148310] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.579 Passthru0 00:05:19.579 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.579 14:42:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.579 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.579 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.579 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.579 14:42:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.579 { 00:05:19.579 "name": "Malloc0", 00:05:19.579 "aliases": [ 00:05:19.580 "c4643ba3-6102-4159-8221-5d14ae20d8b5" 00:05:19.580 ], 00:05:19.580 "product_name": "Malloc disk", 00:05:19.580 "block_size": 512, 
00:05:19.580 "num_blocks": 16384, 00:05:19.580 "uuid": "c4643ba3-6102-4159-8221-5d14ae20d8b5", 00:05:19.580 "assigned_rate_limits": { 00:05:19.580 "rw_ios_per_sec": 0, 00:05:19.580 "rw_mbytes_per_sec": 0, 00:05:19.580 "r_mbytes_per_sec": 0, 00:05:19.580 "w_mbytes_per_sec": 0 00:05:19.580 }, 00:05:19.580 "claimed": true, 00:05:19.580 "claim_type": "exclusive_write", 00:05:19.580 "zoned": false, 00:05:19.580 "supported_io_types": { 00:05:19.580 "read": true, 00:05:19.580 "write": true, 00:05:19.580 "unmap": true, 00:05:19.580 "write_zeroes": true, 00:05:19.580 "flush": true, 00:05:19.580 "reset": true, 00:05:19.580 "compare": false, 00:05:19.580 "compare_and_write": false, 00:05:19.580 "abort": true, 00:05:19.580 "nvme_admin": false, 00:05:19.580 "nvme_io": false 00:05:19.580 }, 00:05:19.580 "memory_domains": [ 00:05:19.580 { 00:05:19.580 "dma_device_id": "system", 00:05:19.580 "dma_device_type": 1 00:05:19.580 }, 00:05:19.580 { 00:05:19.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.580 "dma_device_type": 2 00:05:19.580 } 00:05:19.580 ], 00:05:19.580 "driver_specific": {} 00:05:19.580 }, 00:05:19.580 { 00:05:19.580 "name": "Passthru0", 00:05:19.580 "aliases": [ 00:05:19.580 "42f276bc-c1a6-5e67-bd7c-ae65d8727d72" 00:05:19.580 ], 00:05:19.580 "product_name": "passthru", 00:05:19.580 "block_size": 512, 00:05:19.580 "num_blocks": 16384, 00:05:19.580 "uuid": "42f276bc-c1a6-5e67-bd7c-ae65d8727d72", 00:05:19.580 "assigned_rate_limits": { 00:05:19.580 "rw_ios_per_sec": 0, 00:05:19.580 "rw_mbytes_per_sec": 0, 00:05:19.580 "r_mbytes_per_sec": 0, 00:05:19.580 "w_mbytes_per_sec": 0 00:05:19.580 }, 00:05:19.580 "claimed": false, 00:05:19.580 "zoned": false, 00:05:19.580 "supported_io_types": { 00:05:19.580 "read": true, 00:05:19.580 "write": true, 00:05:19.580 "unmap": true, 00:05:19.580 "write_zeroes": true, 00:05:19.580 "flush": true, 00:05:19.580 "reset": true, 00:05:19.580 "compare": false, 00:05:19.580 "compare_and_write": false, 00:05:19.580 "abort": true, 00:05:19.580 "nvme_admin": false, 00:05:19.580 "nvme_io": false 00:05:19.580 }, 00:05:19.580 "memory_domains": [ 00:05:19.580 { 00:05:19.580 "dma_device_id": "system", 00:05:19.580 "dma_device_type": 1 00:05:19.580 }, 00:05:19.580 { 00:05:19.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.580 "dma_device_type": 2 00:05:19.580 } 00:05:19.580 ], 00:05:19.580 "driver_specific": { 00:05:19.580 "passthru": { 00:05:19.580 "name": "Passthru0", 00:05:19.580 "base_bdev_name": "Malloc0" 00:05:19.580 } 00:05:19.580 } 00:05:19.580 } 00:05:19.580 ]' 00:05:19.580 14:42:02 -- rpc/rpc.sh@21 -- # jq length 00:05:19.580 14:42:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.580 14:42:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.580 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.580 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.580 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.580 14:42:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:19.580 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.580 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.580 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.580 14:42:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.580 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.580 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.840 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.840 14:42:02 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.840 14:42:02 -- rpc/rpc.sh@26 -- # jq length 00:05:19.840 14:42:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.840 00:05:19.840 real 0m0.291s 00:05:19.840 user 0m0.186s 00:05:19.840 sys 0m0.041s 00:05:19.840 14:42:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.840 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.840 ************************************ 00:05:19.840 END TEST rpc_integrity 00:05:19.840 ************************************ 00:05:19.840 14:42:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:19.840 14:42:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.840 14:42:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.840 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.840 ************************************ 00:05:19.840 START TEST rpc_plugins 00:05:19.840 ************************************ 00:05:19.840 14:42:02 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:05:19.840 14:42:02 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:19.840 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.840 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.840 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.840 14:42:02 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:19.840 14:42:02 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:19.840 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.840 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.099 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.099 14:42:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:20.099 { 00:05:20.099 "name": "Malloc1", 00:05:20.099 "aliases": [ 00:05:20.099 "b77a90e9-b6d4-4abd-8a44-e8610c75536d" 00:05:20.099 ], 00:05:20.099 "product_name": "Malloc disk", 00:05:20.099 "block_size": 4096, 00:05:20.099 "num_blocks": 256, 00:05:20.099 "uuid": "b77a90e9-b6d4-4abd-8a44-e8610c75536d", 00:05:20.099 "assigned_rate_limits": { 00:05:20.099 "rw_ios_per_sec": 0, 00:05:20.099 "rw_mbytes_per_sec": 0, 00:05:20.099 "r_mbytes_per_sec": 0, 00:05:20.099 "w_mbytes_per_sec": 0 00:05:20.099 }, 00:05:20.099 "claimed": false, 00:05:20.099 "zoned": false, 00:05:20.099 "supported_io_types": { 00:05:20.099 "read": true, 00:05:20.099 "write": true, 00:05:20.099 "unmap": true, 00:05:20.099 "write_zeroes": true, 00:05:20.099 "flush": true, 00:05:20.099 "reset": true, 00:05:20.099 "compare": false, 00:05:20.099 "compare_and_write": false, 00:05:20.099 "abort": true, 00:05:20.099 "nvme_admin": false, 00:05:20.099 "nvme_io": false 00:05:20.099 }, 00:05:20.099 "memory_domains": [ 00:05:20.099 { 00:05:20.099 "dma_device_id": "system", 00:05:20.099 "dma_device_type": 1 00:05:20.099 }, 00:05:20.099 { 00:05:20.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.099 "dma_device_type": 2 00:05:20.099 } 00:05:20.099 ], 00:05:20.099 "driver_specific": {} 00:05:20.099 } 00:05:20.099 ]' 00:05:20.099 14:42:02 -- rpc/rpc.sh@32 -- # jq length 00:05:20.099 14:42:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:20.099 14:42:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:20.099 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.099 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.099 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.099 14:42:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:20.099 14:42:02 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:05:20.099 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.099 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.099 14:42:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:20.099 14:42:02 -- rpc/rpc.sh@36 -- # jq length 00:05:20.099 14:42:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:20.099 00:05:20.099 real 0m0.147s 00:05:20.099 user 0m0.094s 00:05:20.099 sys 0m0.019s 00:05:20.099 14:42:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.099 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.099 ************************************ 00:05:20.099 END TEST rpc_plugins 00:05:20.099 ************************************ 00:05:20.099 14:42:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:20.099 14:42:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.099 14:42:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.099 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.358 ************************************ 00:05:20.358 START TEST rpc_trace_cmd_test 00:05:20.358 ************************************ 00:05:20.358 14:42:02 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:05:20.358 14:42:02 -- rpc/rpc.sh@40 -- # local info 00:05:20.358 14:42:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:20.358 14:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.358 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.358 14:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.358 14:42:02 -- rpc/rpc.sh@42 -- # info='{ 00:05:20.358 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid858199", 00:05:20.358 "tpoint_group_mask": "0x8", 00:05:20.358 "iscsi_conn": { 00:05:20.358 "mask": "0x2", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "scsi": { 00:05:20.358 "mask": "0x4", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "bdev": { 00:05:20.358 "mask": "0x8", 00:05:20.358 "tpoint_mask": "0xffffffffffffffff" 00:05:20.358 }, 00:05:20.358 "nvmf_rdma": { 00:05:20.358 "mask": "0x10", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "nvmf_tcp": { 00:05:20.358 "mask": "0x20", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "ftl": { 00:05:20.358 "mask": "0x40", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "blobfs": { 00:05:20.358 "mask": "0x80", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "dsa": { 00:05:20.358 "mask": "0x200", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "thread": { 00:05:20.358 "mask": "0x400", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "nvme_pcie": { 00:05:20.358 "mask": "0x800", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "iaa": { 00:05:20.358 "mask": "0x1000", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "nvme_tcp": { 00:05:20.358 "mask": "0x2000", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "bdev_nvme": { 00:05:20.358 "mask": "0x4000", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 }, 00:05:20.358 "sock": { 00:05:20.358 "mask": "0x8000", 00:05:20.358 "tpoint_mask": "0x0" 00:05:20.358 } 00:05:20.358 }' 00:05:20.358 14:42:02 -- rpc/rpc.sh@43 -- # jq length 00:05:20.358 14:42:02 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:20.358 14:42:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:20.358 14:42:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:20.358 14:42:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
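The rpc_plugins test above works because rpc.py can load extra RPC methods from any module reachable through PYTHONPATH; the harness points PYTHONPATH at test/rpc_plugins before calling create_malloc and delete_malloc. A hedged sketch of the same mechanism by hand, assuming an SPDK checkout and a running target:

    # make the sample plugin importable, then invoke its custom methods
    export PYTHONPATH=$PYTHONPATH:./test/rpc_plugins
    scripts/rpc.py --plugin rpc_plugin create_malloc            # returns the new bdev name, e.g. Malloc1
    scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1

The trace_get_info output being checked around this point also ties back to how the target was launched: spdk_tgt was started with -e bdev, which is why tpoint_group_mask reads 0x8 and the bdev group's tpoint_mask is fully enabled (0xffffffffffffffff), and why the log suggests running spdk_trace against the shm path to capture those events.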
00:05:20.358 14:42:02 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:20.358 14:42:02 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:20.358 14:42:03 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:20.358 14:42:03 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:20.618 14:42:03 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:20.618 00:05:20.618 real 0m0.244s 00:05:20.618 user 0m0.208s 00:05:20.618 sys 0m0.028s 00:05:20.618 14:42:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.618 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.618 ************************************ 00:05:20.618 END TEST rpc_trace_cmd_test 00:05:20.618 ************************************ 00:05:20.618 14:42:03 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:20.618 14:42:03 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:20.618 14:42:03 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:20.618 14:42:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.618 14:42:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.618 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.618 ************************************ 00:05:20.618 START TEST rpc_daemon_integrity 00:05:20.618 ************************************ 00:05:20.618 14:42:03 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:20.618 14:42:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.618 14:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.618 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.618 14:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.618 14:42:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.618 14:42:03 -- rpc/rpc.sh@13 -- # jq length 00:05:20.878 14:42:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.878 14:42:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.878 14:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.878 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.878 14:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.878 14:42:03 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:20.878 14:42:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.878 14:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.878 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.878 14:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.878 14:42:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.878 { 00:05:20.878 "name": "Malloc2", 00:05:20.878 "aliases": [ 00:05:20.878 "c40c9387-42a2-4fce-8d25-809781e205d2" 00:05:20.878 ], 00:05:20.878 "product_name": "Malloc disk", 00:05:20.878 "block_size": 512, 00:05:20.878 "num_blocks": 16384, 00:05:20.878 "uuid": "c40c9387-42a2-4fce-8d25-809781e205d2", 00:05:20.878 "assigned_rate_limits": { 00:05:20.878 "rw_ios_per_sec": 0, 00:05:20.878 "rw_mbytes_per_sec": 0, 00:05:20.878 "r_mbytes_per_sec": 0, 00:05:20.878 "w_mbytes_per_sec": 0 00:05:20.878 }, 00:05:20.878 "claimed": false, 00:05:20.878 "zoned": false, 00:05:20.878 "supported_io_types": { 00:05:20.878 "read": true, 00:05:20.878 "write": true, 00:05:20.878 "unmap": true, 00:05:20.878 "write_zeroes": true, 00:05:20.878 "flush": true, 00:05:20.878 "reset": true, 00:05:20.878 "compare": false, 00:05:20.878 "compare_and_write": false, 00:05:20.878 "abort": true, 00:05:20.878 "nvme_admin": false, 00:05:20.878 "nvme_io": false 00:05:20.878 }, 00:05:20.878 "memory_domains": [ 00:05:20.878 { 00:05:20.878 "dma_device_id": "system", 00:05:20.878 
"dma_device_type": 1 00:05:20.878 }, 00:05:20.878 { 00:05:20.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.878 "dma_device_type": 2 00:05:20.878 } 00:05:20.878 ], 00:05:20.878 "driver_specific": {} 00:05:20.878 } 00:05:20.878 ]' 00:05:20.878 14:42:03 -- rpc/rpc.sh@17 -- # jq length 00:05:20.878 14:42:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.878 14:42:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:20.878 14:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.878 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.878 [2024-04-26 14:42:03.386214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:20.878 [2024-04-26 14:42:03.386245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.878 [2024-04-26 14:42:03.386259] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaad720 00:05:20.878 [2024-04-26 14:42:03.386266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.878 [2024-04-26 14:42:03.387484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.878 [2024-04-26 14:42:03.387508] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.878 Passthru0 00:05:20.878 14:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.878 14:42:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.878 14:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.878 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.878 14:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.878 14:42:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.878 { 00:05:20.878 "name": "Malloc2", 00:05:20.878 "aliases": [ 00:05:20.878 "c40c9387-42a2-4fce-8d25-809781e205d2" 00:05:20.878 ], 00:05:20.878 "product_name": "Malloc disk", 00:05:20.878 "block_size": 512, 00:05:20.878 "num_blocks": 16384, 00:05:20.878 "uuid": "c40c9387-42a2-4fce-8d25-809781e205d2", 00:05:20.878 "assigned_rate_limits": { 00:05:20.878 "rw_ios_per_sec": 0, 00:05:20.878 "rw_mbytes_per_sec": 0, 00:05:20.878 "r_mbytes_per_sec": 0, 00:05:20.879 "w_mbytes_per_sec": 0 00:05:20.879 }, 00:05:20.879 "claimed": true, 00:05:20.879 "claim_type": "exclusive_write", 00:05:20.879 "zoned": false, 00:05:20.879 "supported_io_types": { 00:05:20.879 "read": true, 00:05:20.879 "write": true, 00:05:20.879 "unmap": true, 00:05:20.879 "write_zeroes": true, 00:05:20.879 "flush": true, 00:05:20.879 "reset": true, 00:05:20.879 "compare": false, 00:05:20.879 "compare_and_write": false, 00:05:20.879 "abort": true, 00:05:20.879 "nvme_admin": false, 00:05:20.879 "nvme_io": false 00:05:20.879 }, 00:05:20.879 "memory_domains": [ 00:05:20.879 { 00:05:20.879 "dma_device_id": "system", 00:05:20.879 "dma_device_type": 1 00:05:20.879 }, 00:05:20.879 { 00:05:20.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.879 "dma_device_type": 2 00:05:20.879 } 00:05:20.879 ], 00:05:20.879 "driver_specific": {} 00:05:20.879 }, 00:05:20.879 { 00:05:20.879 "name": "Passthru0", 00:05:20.879 "aliases": [ 00:05:20.879 "d7a7ad58-d28f-54ef-b576-ebcb877fb551" 00:05:20.879 ], 00:05:20.879 "product_name": "passthru", 00:05:20.879 "block_size": 512, 00:05:20.879 "num_blocks": 16384, 00:05:20.879 "uuid": "d7a7ad58-d28f-54ef-b576-ebcb877fb551", 00:05:20.879 "assigned_rate_limits": { 00:05:20.879 "rw_ios_per_sec": 0, 00:05:20.879 "rw_mbytes_per_sec": 0, 00:05:20.879 "r_mbytes_per_sec": 0, 00:05:20.879 
"w_mbytes_per_sec": 0 00:05:20.879 }, 00:05:20.879 "claimed": false, 00:05:20.879 "zoned": false, 00:05:20.879 "supported_io_types": { 00:05:20.879 "read": true, 00:05:20.879 "write": true, 00:05:20.879 "unmap": true, 00:05:20.879 "write_zeroes": true, 00:05:20.879 "flush": true, 00:05:20.879 "reset": true, 00:05:20.879 "compare": false, 00:05:20.879 "compare_and_write": false, 00:05:20.879 "abort": true, 00:05:20.879 "nvme_admin": false, 00:05:20.879 "nvme_io": false 00:05:20.879 }, 00:05:20.879 "memory_domains": [ 00:05:20.879 { 00:05:20.879 "dma_device_id": "system", 00:05:20.879 "dma_device_type": 1 00:05:20.879 }, 00:05:20.879 { 00:05:20.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.879 "dma_device_type": 2 00:05:20.879 } 00:05:20.879 ], 00:05:20.879 "driver_specific": { 00:05:20.879 "passthru": { 00:05:20.879 "name": "Passthru0", 00:05:20.879 "base_bdev_name": "Malloc2" 00:05:20.879 } 00:05:20.879 } 00:05:20.879 } 00:05:20.879 ]' 00:05:20.879 14:42:03 -- rpc/rpc.sh@21 -- # jq length 00:05:20.879 14:42:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.879 14:42:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.879 14:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.879 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.879 14:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.879 14:42:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:20.879 14:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.879 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.879 14:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.879 14:42:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.879 14:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.879 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.879 14:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.879 14:42:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.879 14:42:03 -- rpc/rpc.sh@26 -- # jq length 00:05:20.879 14:42:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.879 00:05:20.879 real 0m0.298s 00:05:20.879 user 0m0.191s 00:05:20.879 sys 0m0.040s 00:05:20.879 14:42:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.879 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:20.879 ************************************ 00:05:20.879 END TEST rpc_daemon_integrity 00:05:20.879 ************************************ 00:05:21.139 14:42:03 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:21.139 14:42:03 -- rpc/rpc.sh@84 -- # killprocess 858199 00:05:21.139 14:42:03 -- common/autotest_common.sh@936 -- # '[' -z 858199 ']' 00:05:21.139 14:42:03 -- common/autotest_common.sh@940 -- # kill -0 858199 00:05:21.139 14:42:03 -- common/autotest_common.sh@941 -- # uname 00:05:21.139 14:42:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.139 14:42:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 858199 00:05:21.139 14:42:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.139 14:42:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.139 14:42:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 858199' 00:05:21.139 killing process with pid 858199 00:05:21.139 14:42:03 -- common/autotest_common.sh@955 -- # kill 858199 00:05:21.139 14:42:03 -- common/autotest_common.sh@960 -- # wait 858199 00:05:21.400 00:05:21.400 real 0m2.917s 00:05:21.400 user 0m3.873s 00:05:21.400 
sys 0m0.879s 00:05:21.400 14:42:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.400 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:21.400 ************************************ 00:05:21.400 END TEST rpc 00:05:21.400 ************************************ 00:05:21.400 14:42:03 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:21.400 14:42:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.400 14:42:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.400 14:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:21.400 ************************************ 00:05:21.400 START TEST skip_rpc 00:05:21.400 ************************************ 00:05:21.400 14:42:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:21.660 * Looking for test storage... 00:05:21.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.660 14:42:04 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:21.660 14:42:04 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:21.660 14:42:04 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:21.660 14:42:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.660 14:42:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.660 14:42:04 -- common/autotest_common.sh@10 -- # set +x 00:05:21.660 ************************************ 00:05:21.660 START TEST skip_rpc 00:05:21.660 ************************************ 00:05:21.660 14:42:04 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:21.660 14:42:04 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=859138 00:05:21.660 14:42:04 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.660 14:42:04 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:21.660 14:42:04 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:21.919 [2024-04-26 14:42:04.337656] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:21.919 [2024-04-26 14:42:04.337705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859138 ] 00:05:21.919 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.919 [2024-04-26 14:42:04.399069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.919 [2024-04-26 14:42:04.462792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.218 14:42:09 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:27.218 14:42:09 -- common/autotest_common.sh@638 -- # local es=0 00:05:27.218 14:42:09 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:27.218 14:42:09 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:27.218 14:42:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.218 14:42:09 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:27.218 14:42:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.218 14:42:09 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:27.218 14:42:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.218 14:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:27.218 14:42:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:27.218 14:42:09 -- common/autotest_common.sh@641 -- # es=1 00:05:27.218 14:42:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:27.218 14:42:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:27.218 14:42:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:27.218 14:42:09 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:27.218 14:42:09 -- rpc/skip_rpc.sh@23 -- # killprocess 859138 00:05:27.218 14:42:09 -- common/autotest_common.sh@936 -- # '[' -z 859138 ']' 00:05:27.218 14:42:09 -- common/autotest_common.sh@940 -- # kill -0 859138 00:05:27.218 14:42:09 -- common/autotest_common.sh@941 -- # uname 00:05:27.218 14:42:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.218 14:42:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 859138 00:05:27.218 14:42:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.218 14:42:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.218 14:42:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 859138' 00:05:27.218 killing process with pid 859138 00:05:27.218 14:42:09 -- common/autotest_common.sh@955 -- # kill 859138 00:05:27.218 14:42:09 -- common/autotest_common.sh@960 -- # wait 859138 00:05:27.218 00:05:27.218 real 0m5.278s 00:05:27.218 user 0m5.099s 00:05:27.218 sys 0m0.211s 00:05:27.218 14:42:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.218 14:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:27.218 ************************************ 00:05:27.218 END TEST skip_rpc 00:05:27.218 ************************************ 00:05:27.218 14:42:09 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:27.218 14:42:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.218 14:42:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.218 14:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:27.218 ************************************ 00:05:27.218 START TEST skip_rpc_with_json 00:05:27.218 ************************************ 00:05:27.218 
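The skip_rpc case that just finished starts the target with --no-rpc-server and then asserts that an RPC call fails, which is why the NOT rpc_cmd spdk_get_version block above ends with es=1. A hedged, stand-alone sketch of the same check, using the binary path seen throughout this job; the 5-second sleep mirrors the script's crude wait, since there is no RPC socket to poll:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                        # no RPC socket to wait on, so just give it time to start
    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered despite --no-rpc-server" >&2
    else
        echo "RPC is disabled, as expected"
    fi
    kill $tgt_pid
    wait $tgt_pid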
14:42:09 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:27.218 14:42:09 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:27.218 14:42:09 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=860655 00:05:27.218 14:42:09 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.218 14:42:09 -- rpc/skip_rpc.sh@31 -- # waitforlisten 860655 00:05:27.218 14:42:09 -- common/autotest_common.sh@817 -- # '[' -z 860655 ']' 00:05:27.218 14:42:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.218 14:42:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.218 14:42:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.218 14:42:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.218 14:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:27.218 14:42:09 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.218 [2024-04-26 14:42:09.802351] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:27.218 [2024-04-26 14:42:09.802403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860655 ] 00:05:27.218 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.218 [2024-04-26 14:42:09.865031] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.479 [2024-04-26 14:42:09.929492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.479 14:42:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.479 14:42:10 -- common/autotest_common.sh@850 -- # return 0 00:05:27.479 14:42:10 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:27.479 14:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.479 14:42:10 -- common/autotest_common.sh@10 -- # set +x 00:05:27.479 [2024-04-26 14:42:10.100109] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:27.479 request: 00:05:27.479 { 00:05:27.479 "trtype": "tcp", 00:05:27.479 "method": "nvmf_get_transports", 00:05:27.479 "req_id": 1 00:05:27.479 } 00:05:27.479 Got JSON-RPC error response 00:05:27.479 response: 00:05:27.479 { 00:05:27.479 "code": -19, 00:05:27.479 "message": "No such device" 00:05:27.479 } 00:05:27.479 14:42:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:27.479 14:42:10 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:27.479 14:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.479 14:42:10 -- common/autotest_common.sh@10 -- # set +x 00:05:27.479 [2024-04-26 14:42:10.112225] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.479 14:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.479 14:42:10 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:27.479 14:42:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:27.479 14:42:10 -- common/autotest_common.sh@10 -- # set +x 00:05:27.741 14:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:27.741 14:42:10 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.741 { 00:05:27.741 
"subsystems": [ 00:05:27.741 { 00:05:27.741 "subsystem": "vfio_user_target", 00:05:27.741 "config": null 00:05:27.741 }, 00:05:27.741 { 00:05:27.741 "subsystem": "keyring", 00:05:27.741 "config": [] 00:05:27.741 }, 00:05:27.741 { 00:05:27.741 "subsystem": "iobuf", 00:05:27.741 "config": [ 00:05:27.741 { 00:05:27.741 "method": "iobuf_set_options", 00:05:27.741 "params": { 00:05:27.741 "small_pool_count": 8192, 00:05:27.741 "large_pool_count": 1024, 00:05:27.741 "small_bufsize": 8192, 00:05:27.741 "large_bufsize": 135168 00:05:27.741 } 00:05:27.741 } 00:05:27.741 ] 00:05:27.741 }, 00:05:27.741 { 00:05:27.741 "subsystem": "sock", 00:05:27.741 "config": [ 00:05:27.741 { 00:05:27.741 "method": "sock_impl_set_options", 00:05:27.741 "params": { 00:05:27.741 "impl_name": "posix", 00:05:27.741 "recv_buf_size": 2097152, 00:05:27.741 "send_buf_size": 2097152, 00:05:27.741 "enable_recv_pipe": true, 00:05:27.741 "enable_quickack": false, 00:05:27.741 "enable_placement_id": 0, 00:05:27.741 "enable_zerocopy_send_server": true, 00:05:27.741 "enable_zerocopy_send_client": false, 00:05:27.741 "zerocopy_threshold": 0, 00:05:27.741 "tls_version": 0, 00:05:27.741 "enable_ktls": false 00:05:27.741 } 00:05:27.741 }, 00:05:27.741 { 00:05:27.741 "method": "sock_impl_set_options", 00:05:27.741 "params": { 00:05:27.741 "impl_name": "ssl", 00:05:27.741 "recv_buf_size": 4096, 00:05:27.741 "send_buf_size": 4096, 00:05:27.741 "enable_recv_pipe": true, 00:05:27.741 "enable_quickack": false, 00:05:27.741 "enable_placement_id": 0, 00:05:27.741 "enable_zerocopy_send_server": true, 00:05:27.741 "enable_zerocopy_send_client": false, 00:05:27.741 "zerocopy_threshold": 0, 00:05:27.741 "tls_version": 0, 00:05:27.741 "enable_ktls": false 00:05:27.741 } 00:05:27.741 } 00:05:27.741 ] 00:05:27.741 }, 00:05:27.741 { 00:05:27.741 "subsystem": "vmd", 00:05:27.741 "config": [] 00:05:27.741 }, 00:05:27.741 { 00:05:27.741 "subsystem": "accel", 00:05:27.741 "config": [ 00:05:27.741 { 00:05:27.741 "method": "accel_set_options", 00:05:27.741 "params": { 00:05:27.741 "small_cache_size": 128, 00:05:27.741 "large_cache_size": 16, 00:05:27.741 "task_count": 2048, 00:05:27.741 "sequence_count": 2048, 00:05:27.741 "buf_count": 2048 00:05:27.741 } 00:05:27.741 } 00:05:27.741 ] 00:05:27.741 }, 00:05:27.741 { 00:05:27.741 "subsystem": "bdev", 00:05:27.741 "config": [ 00:05:27.741 { 00:05:27.741 "method": "bdev_set_options", 00:05:27.741 "params": { 00:05:27.741 "bdev_io_pool_size": 65535, 00:05:27.741 "bdev_io_cache_size": 256, 00:05:27.741 "bdev_auto_examine": true, 00:05:27.741 "iobuf_small_cache_size": 128, 00:05:27.741 "iobuf_large_cache_size": 16 00:05:27.741 } 00:05:27.741 }, 00:05:27.741 { 00:05:27.741 "method": "bdev_raid_set_options", 00:05:27.741 "params": { 00:05:27.741 "process_window_size_kb": 1024 00:05:27.741 } 00:05:27.741 }, 00:05:27.741 { 00:05:27.741 "method": "bdev_iscsi_set_options", 00:05:27.741 "params": { 00:05:27.741 "timeout_sec": 30 00:05:27.741 } 00:05:27.741 }, 00:05:27.742 { 00:05:27.742 "method": "bdev_nvme_set_options", 00:05:27.742 "params": { 00:05:27.742 "action_on_timeout": "none", 00:05:27.742 "timeout_us": 0, 00:05:27.742 "timeout_admin_us": 0, 00:05:27.742 "keep_alive_timeout_ms": 10000, 00:05:27.742 "arbitration_burst": 0, 00:05:27.742 "low_priority_weight": 0, 00:05:27.742 "medium_priority_weight": 0, 00:05:27.742 "high_priority_weight": 0, 00:05:27.742 "nvme_adminq_poll_period_us": 10000, 00:05:27.742 "nvme_ioq_poll_period_us": 0, 00:05:27.742 "io_queue_requests": 0, 00:05:27.742 "delay_cmd_submit": true, 
00:05:27.742 "transport_retry_count": 4, 00:05:27.742 "bdev_retry_count": 3, 00:05:27.742 "transport_ack_timeout": 0, 00:05:27.742 "ctrlr_loss_timeout_sec": 0, 00:05:27.742 "reconnect_delay_sec": 0, 00:05:27.742 "fast_io_fail_timeout_sec": 0, 00:05:27.742 "disable_auto_failback": false, 00:05:27.742 "generate_uuids": false, 00:05:27.742 "transport_tos": 0, 00:05:27.742 "nvme_error_stat": false, 00:05:27.742 "rdma_srq_size": 0, 00:05:27.742 "io_path_stat": false, 00:05:27.742 "allow_accel_sequence": false, 00:05:27.742 "rdma_max_cq_size": 0, 00:05:27.742 "rdma_cm_event_timeout_ms": 0, 00:05:27.742 "dhchap_digests": [ 00:05:27.742 "sha256", 00:05:27.742 "sha384", 00:05:27.742 "sha512" 00:05:27.742 ], 00:05:27.742 "dhchap_dhgroups": [ 00:05:27.742 "null", 00:05:27.742 "ffdhe2048", 00:05:27.742 "ffdhe3072", 00:05:27.742 "ffdhe4096", 00:05:27.742 "ffdhe6144", 00:05:27.742 "ffdhe8192" 00:05:27.742 ] 00:05:27.742 } 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "method": "bdev_nvme_set_hotplug", 00:05:27.742 "params": { 00:05:27.742 "period_us": 100000, 00:05:27.742 "enable": false 00:05:27.742 } 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "method": "bdev_wait_for_examine" 00:05:27.742 } 00:05:27.742 ] 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "subsystem": "scsi", 00:05:27.742 "config": null 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "subsystem": "scheduler", 00:05:27.742 "config": [ 00:05:27.742 { 00:05:27.742 "method": "framework_set_scheduler", 00:05:27.742 "params": { 00:05:27.742 "name": "static" 00:05:27.742 } 00:05:27.742 } 00:05:27.742 ] 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "subsystem": "vhost_scsi", 00:05:27.742 "config": [] 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "subsystem": "vhost_blk", 00:05:27.742 "config": [] 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "subsystem": "ublk", 00:05:27.742 "config": [] 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "subsystem": "nbd", 00:05:27.742 "config": [] 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "subsystem": "nvmf", 00:05:27.742 "config": [ 00:05:27.742 { 00:05:27.742 "method": "nvmf_set_config", 00:05:27.742 "params": { 00:05:27.742 "discovery_filter": "match_any", 00:05:27.742 "admin_cmd_passthru": { 00:05:27.742 "identify_ctrlr": false 00:05:27.742 } 00:05:27.742 } 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "method": "nvmf_set_max_subsystems", 00:05:27.742 "params": { 00:05:27.742 "max_subsystems": 1024 00:05:27.742 } 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "method": "nvmf_set_crdt", 00:05:27.742 "params": { 00:05:27.742 "crdt1": 0, 00:05:27.742 "crdt2": 0, 00:05:27.742 "crdt3": 0 00:05:27.742 } 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "method": "nvmf_create_transport", 00:05:27.742 "params": { 00:05:27.742 "trtype": "TCP", 00:05:27.742 "max_queue_depth": 128, 00:05:27.742 "max_io_qpairs_per_ctrlr": 127, 00:05:27.742 "in_capsule_data_size": 4096, 00:05:27.742 "max_io_size": 131072, 00:05:27.742 "io_unit_size": 131072, 00:05:27.742 "max_aq_depth": 128, 00:05:27.742 "num_shared_buffers": 511, 00:05:27.742 "buf_cache_size": 4294967295, 00:05:27.742 "dif_insert_or_strip": false, 00:05:27.742 "zcopy": false, 00:05:27.742 "c2h_success": true, 00:05:27.742 "sock_priority": 0, 00:05:27.742 "abort_timeout_sec": 1, 00:05:27.742 "ack_timeout": 0, 00:05:27.742 "data_wr_pool_size": 0 00:05:27.742 } 00:05:27.742 } 00:05:27.742 ] 00:05:27.742 }, 00:05:27.742 { 00:05:27.742 "subsystem": "iscsi", 00:05:27.742 "config": [ 00:05:27.742 { 00:05:27.742 "method": "iscsi_set_options", 00:05:27.742 "params": { 00:05:27.742 "node_base": 
"iqn.2016-06.io.spdk", 00:05:27.742 "max_sessions": 128, 00:05:27.742 "max_connections_per_session": 2, 00:05:27.742 "max_queue_depth": 64, 00:05:27.742 "default_time2wait": 2, 00:05:27.742 "default_time2retain": 20, 00:05:27.742 "first_burst_length": 8192, 00:05:27.742 "immediate_data": true, 00:05:27.742 "allow_duplicated_isid": false, 00:05:27.742 "error_recovery_level": 0, 00:05:27.742 "nop_timeout": 60, 00:05:27.742 "nop_in_interval": 30, 00:05:27.742 "disable_chap": false, 00:05:27.742 "require_chap": false, 00:05:27.742 "mutual_chap": false, 00:05:27.742 "chap_group": 0, 00:05:27.742 "max_large_datain_per_connection": 64, 00:05:27.742 "max_r2t_per_connection": 4, 00:05:27.742 "pdu_pool_size": 36864, 00:05:27.742 "immediate_data_pool_size": 16384, 00:05:27.742 "data_out_pool_size": 2048 00:05:27.742 } 00:05:27.742 } 00:05:27.742 ] 00:05:27.742 } 00:05:27.742 ] 00:05:27.742 } 00:05:27.742 14:42:10 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:27.742 14:42:10 -- rpc/skip_rpc.sh@40 -- # killprocess 860655 00:05:27.742 14:42:10 -- common/autotest_common.sh@936 -- # '[' -z 860655 ']' 00:05:27.742 14:42:10 -- common/autotest_common.sh@940 -- # kill -0 860655 00:05:27.742 14:42:10 -- common/autotest_common.sh@941 -- # uname 00:05:27.742 14:42:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.742 14:42:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 860655 00:05:27.742 14:42:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.743 14:42:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.743 14:42:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 860655' 00:05:27.743 killing process with pid 860655 00:05:27.743 14:42:10 -- common/autotest_common.sh@955 -- # kill 860655 00:05:27.743 14:42:10 -- common/autotest_common.sh@960 -- # wait 860655 00:05:28.004 14:42:10 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=860979 00:05:28.004 14:42:10 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.004 14:42:10 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.292 14:42:15 -- rpc/skip_rpc.sh@50 -- # killprocess 860979 00:05:33.292 14:42:15 -- common/autotest_common.sh@936 -- # '[' -z 860979 ']' 00:05:33.292 14:42:15 -- common/autotest_common.sh@940 -- # kill -0 860979 00:05:33.292 14:42:15 -- common/autotest_common.sh@941 -- # uname 00:05:33.292 14:42:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.292 14:42:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 860979 00:05:33.292 14:42:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.292 14:42:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.292 14:42:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 860979' 00:05:33.292 killing process with pid 860979 00:05:33.292 14:42:15 -- common/autotest_common.sh@955 -- # kill 860979 00:05:33.292 14:42:15 -- common/autotest_common.sh@960 -- # wait 860979 00:05:33.292 14:42:15 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:33.292 14:42:15 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:33.292 00:05:33.292 real 0m6.074s 00:05:33.292 user 0m5.904s 00:05:33.292 sys 0m0.475s 00:05:33.292 14:42:15 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.292 14:42:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.292 ************************************ 00:05:33.292 END TEST skip_rpc_with_json 00:05:33.292 ************************************ 00:05:33.292 14:42:15 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:33.292 14:42:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.292 14:42:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.292 14:42:15 -- common/autotest_common.sh@10 -- # set +x 00:05:33.553 ************************************ 00:05:33.553 START TEST skip_rpc_with_delay 00:05:33.553 ************************************ 00:05:33.553 14:42:16 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:33.553 14:42:16 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.553 14:42:16 -- common/autotest_common.sh@638 -- # local es=0 00:05:33.553 14:42:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.553 14:42:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.553 14:42:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:33.553 14:42:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.553 14:42:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:33.553 14:42:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.553 14:42:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:33.553 14:42:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.553 14:42:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:33.553 14:42:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.553 [2024-04-26 14:42:16.067789] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
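
The error above is the expected result: skip_rpc_with_delay asserts that spdk_tgt refuses --wait-for-rpc when the RPC server has been disabled. A stand-alone sketch of that assertion, assuming a local SPDK build at ./build/bin (the test itself wraps the call in the NOT helper from autotest_common.sh):

# Combining --no-rpc-server with --wait-for-rpc should make spdk_tgt exit non-zero.
if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: spdk_tgt started with --wait-for-rpc but no RPC server" >&2
    exit 1
fi
echo "PASS: flag combination rejected, as the test expects"
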
00:05:33.553 [2024-04-26 14:42:16.067892] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:33.553 14:42:16 -- common/autotest_common.sh@641 -- # es=1 00:05:33.553 14:42:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:33.553 14:42:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:33.553 14:42:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:33.553 00:05:33.553 real 0m0.075s 00:05:33.553 user 0m0.041s 00:05:33.553 sys 0m0.034s 00:05:33.553 14:42:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.553 14:42:16 -- common/autotest_common.sh@10 -- # set +x 00:05:33.553 ************************************ 00:05:33.553 END TEST skip_rpc_with_delay 00:05:33.553 ************************************ 00:05:33.553 14:42:16 -- rpc/skip_rpc.sh@77 -- # uname 00:05:33.553 14:42:16 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:33.553 14:42:16 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:33.553 14:42:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.553 14:42:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.553 14:42:16 -- common/autotest_common.sh@10 -- # set +x 00:05:33.813 ************************************ 00:05:33.813 START TEST exit_on_failed_rpc_init 00:05:33.813 ************************************ 00:05:33.813 14:42:16 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:33.813 14:42:16 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=862052 00:05:33.813 14:42:16 -- rpc/skip_rpc.sh@63 -- # waitforlisten 862052 00:05:33.813 14:42:16 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.813 14:42:16 -- common/autotest_common.sh@817 -- # '[' -z 862052 ']' 00:05:33.813 14:42:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.813 14:42:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:33.813 14:42:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.813 14:42:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:33.813 14:42:16 -- common/autotest_common.sh@10 -- # set +x 00:05:33.813 [2024-04-26 14:42:16.333334] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:33.813 [2024-04-26 14:42:16.333389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862052 ] 00:05:33.813 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.813 [2024-04-26 14:42:16.398101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.813 [2024-04-26 14:42:16.470951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.797 14:42:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:34.797 14:42:17 -- common/autotest_common.sh@850 -- # return 0 00:05:34.797 14:42:17 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.797 14:42:17 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.797 14:42:17 -- common/autotest_common.sh@638 -- # local es=0 00:05:34.797 14:42:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.797 14:42:17 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.797 14:42:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.797 14:42:17 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.797 14:42:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.797 14:42:17 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.797 14:42:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.797 14:42:17 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.797 14:42:17 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:34.797 14:42:17 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.797 [2024-04-26 14:42:17.153400] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:34.797 [2024-04-26 14:42:17.153454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862387 ] 00:05:34.797 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.797 [2024-04-26 14:42:17.229761] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.798 [2024-04-26 14:42:17.291858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.798 [2024-04-26 14:42:17.291921] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
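
The "socket in use" error just above is exactly what exit_on_failed_rpc_init waits for: both targets were started without -r, so the second one tries to claim the default /var/tmp/spdk.sock that the first already owns. A hedged sketch of the collision (binary path and the one-second settle time are assumptions):

# First target takes the default RPC socket /var/tmp/spdk.sock.
./build/bin/spdk_tgt -m 0x1 &
sleep 1
# Second target on the same default socket: rpc_listen fails and the app exits non-zero.
./build/bin/spdk_tgt -m 0x2 || echo "second target failed to init, as the test expects"
# Running two targets side by side normally means giving the second its own socket:
#   ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock
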
00:05:34.798 [2024-04-26 14:42:17.291931] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:34.798 [2024-04-26 14:42:17.291937] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.798 14:42:17 -- common/autotest_common.sh@641 -- # es=234 00:05:34.798 14:42:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:34.798 14:42:17 -- common/autotest_common.sh@650 -- # es=106 00:05:34.798 14:42:17 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:34.798 14:42:17 -- common/autotest_common.sh@658 -- # es=1 00:05:34.798 14:42:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:34.798 14:42:17 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:34.798 14:42:17 -- rpc/skip_rpc.sh@70 -- # killprocess 862052 00:05:34.798 14:42:17 -- common/autotest_common.sh@936 -- # '[' -z 862052 ']' 00:05:34.798 14:42:17 -- common/autotest_common.sh@940 -- # kill -0 862052 00:05:34.798 14:42:17 -- common/autotest_common.sh@941 -- # uname 00:05:34.798 14:42:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.798 14:42:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 862052 00:05:34.798 14:42:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.798 14:42:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.798 14:42:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 862052' 00:05:34.798 killing process with pid 862052 00:05:34.798 14:42:17 -- common/autotest_common.sh@955 -- # kill 862052 00:05:34.798 14:42:17 -- common/autotest_common.sh@960 -- # wait 862052 00:05:35.058 00:05:35.058 real 0m1.334s 00:05:35.058 user 0m1.557s 00:05:35.058 sys 0m0.370s 00:05:35.058 14:42:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.058 14:42:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.058 ************************************ 00:05:35.058 END TEST exit_on_failed_rpc_init 00:05:35.058 ************************************ 00:05:35.058 14:42:17 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.058 00:05:35.058 real 0m13.621s 00:05:35.058 user 0m12.920s 00:05:35.058 sys 0m1.568s 00:05:35.058 14:42:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.058 14:42:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.058 ************************************ 00:05:35.058 END TEST skip_rpc 00:05:35.058 ************************************ 00:05:35.058 14:42:17 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:35.058 14:42:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.058 14:42:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.058 14:42:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.319 ************************************ 00:05:35.319 START TEST rpc_client 00:05:35.319 ************************************ 00:05:35.319 14:42:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:35.319 * Looking for test storage... 
00:05:35.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:35.319 14:42:17 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:35.319 OK 00:05:35.319 14:42:17 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.319 00:05:35.319 real 0m0.132s 00:05:35.319 user 0m0.059s 00:05:35.319 sys 0m0.081s 00:05:35.319 14:42:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.319 14:42:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.319 ************************************ 00:05:35.319 END TEST rpc_client 00:05:35.319 ************************************ 00:05:35.580 14:42:18 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:35.580 14:42:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.580 14:42:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.580 14:42:18 -- common/autotest_common.sh@10 -- # set +x 00:05:35.580 ************************************ 00:05:35.580 START TEST json_config 00:05:35.580 ************************************ 00:05:35.580 14:42:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:35.580 14:42:18 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.580 14:42:18 -- nvmf/common.sh@7 -- # uname -s 00:05:35.580 14:42:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.580 14:42:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.580 14:42:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.580 14:42:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.580 14:42:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.580 14:42:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.580 14:42:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.580 14:42:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.580 14:42:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.580 14:42:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.842 14:42:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:35.842 14:42:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:35.842 14:42:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.842 14:42:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.842 14:42:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.842 14:42:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.842 14:42:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:35.842 14:42:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.842 14:42:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.842 14:42:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.842 14:42:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.842 14:42:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.842 14:42:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.842 14:42:18 -- paths/export.sh@5 -- # export PATH 00:05:35.842 14:42:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.842 14:42:18 -- nvmf/common.sh@47 -- # : 0 00:05:35.842 14:42:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.842 14:42:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.842 14:42:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.842 14:42:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.842 14:42:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.842 14:42:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.842 14:42:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.842 14:42:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.842 14:42:18 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:35.842 14:42:18 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.842 14:42:18 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.842 14:42:18 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.842 14:42:18 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.842 14:42:18 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:35.842 14:42:18 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:35.842 14:42:18 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:35.842 14:42:18 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:35.842 14:42:18 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:35.842 14:42:18 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:35.842 14:42:18 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:35.842 14:42:18 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:35.842 14:42:18 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:35.842 14:42:18 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.842 14:42:18 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:35.842 INFO: JSON configuration test init 00:05:35.842 14:42:18 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:35.842 14:42:18 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:35.842 14:42:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:35.842 14:42:18 -- common/autotest_common.sh@10 -- # set +x 00:05:35.842 14:42:18 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:35.842 14:42:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:35.842 14:42:18 -- common/autotest_common.sh@10 -- # set +x 00:05:35.842 14:42:18 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:35.842 14:42:18 -- json_config/common.sh@9 -- # local app=target 00:05:35.842 14:42:18 -- json_config/common.sh@10 -- # shift 00:05:35.842 14:42:18 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.842 14:42:18 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.842 14:42:18 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.842 14:42:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.842 14:42:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.842 14:42:18 -- json_config/common.sh@22 -- # app_pid["$app"]=862576 00:05:35.842 14:42:18 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.842 Waiting for target to run... 00:05:35.842 14:42:18 -- json_config/common.sh@25 -- # waitforlisten 862576 /var/tmp/spdk_tgt.sock 00:05:35.842 14:42:18 -- common/autotest_common.sh@817 -- # '[' -z 862576 ']' 00:05:35.842 14:42:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.842 14:42:18 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:35.842 14:42:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:35.842 14:42:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.842 14:42:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:35.842 14:42:18 -- common/autotest_common.sh@10 -- # set +x 00:05:35.842 [2024-04-26 14:42:18.337352] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
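
The target here is started with --wait-for-rpc, so it pauses before subsystem initialization until a configuration arrives on the RPC socket; a few lines further down the test pushes one with scripts/rpc.py load_config. A rough sketch of that flow (the config file name is a placeholder, and load_config reading the JSON from stdin is an assumption worth checking against rpc.py --help):

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# Push a previously saved configuration; subsystems are initialized as it is applied.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < spdk_tgt_config.json
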
00:05:35.842 [2024-04-26 14:42:18.337428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862576 ] 00:05:35.842 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.102 [2024-04-26 14:42:18.587760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.102 [2024-04-26 14:42:18.637082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.683 14:42:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.683 14:42:19 -- common/autotest_common.sh@850 -- # return 0 00:05:36.683 14:42:19 -- json_config/common.sh@26 -- # echo '' 00:05:36.683 00:05:36.683 14:42:19 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:36.683 14:42:19 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:36.683 14:42:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:36.684 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:36.684 14:42:19 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:36.684 14:42:19 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:36.684 14:42:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:36.684 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:36.684 14:42:19 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:36.684 14:42:19 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:36.684 14:42:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:37.254 14:42:19 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:37.254 14:42:19 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:37.254 14:42:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:37.254 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:37.254 14:42:19 -- json_config/json_config.sh@45 -- # local ret=0 00:05:37.254 14:42:19 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:37.254 14:42:19 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:37.254 14:42:19 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:37.254 14:42:19 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:37.254 14:42:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:37.254 14:42:19 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:37.254 14:42:19 -- json_config/json_config.sh@48 -- # local get_types 00:05:37.254 14:42:19 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:37.254 14:42:19 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:37.254 14:42:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:37.254 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:37.254 14:42:19 -- json_config/json_config.sh@55 -- # return 0 00:05:37.254 14:42:19 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:37.254 14:42:19 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:37.254 14:42:19 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:37.254 14:42:19 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:37.254 14:42:19 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:37.254 14:42:19 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:37.254 14:42:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:37.254 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:37.254 14:42:19 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:37.254 14:42:19 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:37.254 14:42:19 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:37.254 14:42:19 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.254 14:42:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.514 MallocForNvmf0 00:05:37.514 14:42:20 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.514 14:42:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.775 MallocForNvmf1 00:05:37.775 14:42:20 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.775 14:42:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.775 [2024-04-26 14:42:20.378095] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.775 14:42:20 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.775 14:42:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.036 14:42:20 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.036 14:42:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.295 14:42:20 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.295 14:42:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.295 14:42:20 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.295 14:42:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.555 [2024-04-26 14:42:21.028188] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.555 14:42:21 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:38.555 14:42:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:38.555 
14:42:21 -- common/autotest_common.sh@10 -- # set +x 00:05:38.555 14:42:21 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:38.555 14:42:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:38.555 14:42:21 -- common/autotest_common.sh@10 -- # set +x 00:05:38.555 14:42:21 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:38.555 14:42:21 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.555 14:42:21 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.838 MallocBdevForConfigChangeCheck 00:05:38.838 14:42:21 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:38.838 14:42:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:38.838 14:42:21 -- common/autotest_common.sh@10 -- # set +x 00:05:38.838 14:42:21 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:38.838 14:42:21 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.097 14:42:21 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:39.097 INFO: shutting down applications... 00:05:39.097 14:42:21 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:39.097 14:42:21 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:39.097 14:42:21 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:39.097 14:42:21 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:39.357 Calling clear_iscsi_subsystem 00:05:39.357 Calling clear_nvmf_subsystem 00:05:39.357 Calling clear_nbd_subsystem 00:05:39.357 Calling clear_ublk_subsystem 00:05:39.357 Calling clear_vhost_blk_subsystem 00:05:39.357 Calling clear_vhost_scsi_subsystem 00:05:39.357 Calling clear_bdev_subsystem 00:05:39.617 14:42:22 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:39.617 14:42:22 -- json_config/json_config.sh@343 -- # count=100 00:05:39.617 14:42:22 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:39.617 14:42:22 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:39.617 14:42:22 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.617 14:42:22 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:39.877 14:42:22 -- json_config/json_config.sh@345 -- # break 00:05:39.877 14:42:22 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:39.877 14:42:22 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:39.877 14:42:22 -- json_config/common.sh@31 -- # local app=target 00:05:39.877 14:42:22 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.877 14:42:22 -- json_config/common.sh@35 -- # [[ -n 862576 ]] 00:05:39.877 14:42:22 -- json_config/common.sh@38 -- # kill -SIGINT 862576 00:05:39.877 14:42:22 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.877 14:42:22 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.877 14:42:22 -- json_config/common.sh@41 -- # kill -0 862576 00:05:39.877 14:42:22 -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.449 14:42:22 -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.449 14:42:22 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.449 14:42:22 -- json_config/common.sh@41 -- # kill -0 862576 00:05:40.449 14:42:22 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.449 14:42:22 -- json_config/common.sh@43 -- # break 00:05:40.449 14:42:22 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.449 14:42:22 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.449 SPDK target shutdown done 00:05:40.449 14:42:22 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:40.449 INFO: relaunching applications... 00:05:40.449 14:42:22 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.449 14:42:22 -- json_config/common.sh@9 -- # local app=target 00:05:40.449 14:42:22 -- json_config/common.sh@10 -- # shift 00:05:40.449 14:42:22 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.449 14:42:22 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.449 14:42:22 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.449 14:42:22 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.449 14:42:22 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.449 14:42:22 -- json_config/common.sh@22 -- # app_pid["$app"]=863659 00:05:40.449 14:42:22 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.449 Waiting for target to run... 00:05:40.449 14:42:22 -- json_config/common.sh@25 -- # waitforlisten 863659 /var/tmp/spdk_tgt.sock 00:05:40.449 14:42:22 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.449 14:42:22 -- common/autotest_common.sh@817 -- # '[' -z 863659 ']' 00:05:40.449 14:42:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.449 14:42:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.449 14:42:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.449 14:42:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.449 14:42:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.449 [2024-04-26 14:42:22.968507] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
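
The spdk_tgt_config.json being reloaded here was produced by save_config after the NVMe-oF objects were created earlier in the run. The same objects can be built interactively with the RPCs already visible above; collected in one place (socket path, sizes and NQN all taken from this log):

RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
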
00:05:40.449 [2024-04-26 14:42:22.968570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863659 ] 00:05:40.449 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.710 [2024-04-26 14:42:23.334614] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.970 [2024-04-26 14:42:23.385291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.232 [2024-04-26 14:42:23.873044] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.492 [2024-04-26 14:42:23.905410] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.492 14:42:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:41.492 14:42:23 -- common/autotest_common.sh@850 -- # return 0 00:05:41.492 14:42:23 -- json_config/common.sh@26 -- # echo '' 00:05:41.492 00:05:41.492 14:42:23 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:41.492 14:42:23 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:41.492 INFO: Checking if target configuration is the same... 00:05:41.492 14:42:23 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.492 14:42:23 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:41.492 14:42:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.492 + '[' 2 -ne 2 ']' 00:05:41.492 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.492 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.492 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.492 +++ basename /dev/fd/62 00:05:41.492 ++ mktemp /tmp/62.XXX 00:05:41.492 + tmp_file_1=/tmp/62.MXv 00:05:41.492 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.492 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.492 + tmp_file_2=/tmp/spdk_tgt_config.json.Dho 00:05:41.492 + ret=0 00:05:41.492 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.752 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.752 + diff -u /tmp/62.MXv /tmp/spdk_tgt_config.json.Dho 00:05:41.752 + echo 'INFO: JSON config files are the same' 00:05:41.752 INFO: JSON config files are the same 00:05:41.752 + rm /tmp/62.MXv /tmp/spdk_tgt_config.json.Dho 00:05:41.752 + exit 0 00:05:41.752 14:42:24 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:41.752 14:42:24 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:41.752 INFO: changing configuration and checking if this can be detected... 
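
The "JSON config files are the same" verdict comes from json_diff.sh, which sorts both documents with config_filter.py and diffs the results, as the + lines above show. Condensed (this assumes config_filter.py accepts the configuration on stdin, which is how json_diff.sh feeds it):

SORT="./test/json_config/config_filter.py -method sort"
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $SORT > /tmp/live.sorted
$SORT < ./spdk_tgt_config.json > /tmp/saved.sorted
diff -u /tmp/saved.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'
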
00:05:41.752 14:42:24 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.752 14:42:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.014 14:42:24 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.014 14:42:24 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:42.014 14:42:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.014 + '[' 2 -ne 2 ']' 00:05:42.014 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.014 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:42.014 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.014 +++ basename /dev/fd/62 00:05:42.014 ++ mktemp /tmp/62.XXX 00:05:42.014 + tmp_file_1=/tmp/62.TMu 00:05:42.014 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.014 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.014 + tmp_file_2=/tmp/spdk_tgt_config.json.590 00:05:42.014 + ret=0 00:05:42.014 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.274 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.274 + diff -u /tmp/62.TMu /tmp/spdk_tgt_config.json.590 00:05:42.274 + ret=1 00:05:42.274 + echo '=== Start of file: /tmp/62.TMu ===' 00:05:42.274 + cat /tmp/62.TMu 00:05:42.274 + echo '=== End of file: /tmp/62.TMu ===' 00:05:42.274 + echo '' 00:05:42.274 + echo '=== Start of file: /tmp/spdk_tgt_config.json.590 ===' 00:05:42.274 + cat /tmp/spdk_tgt_config.json.590 00:05:42.274 + echo '=== End of file: /tmp/spdk_tgt_config.json.590 ===' 00:05:42.274 + echo '' 00:05:42.274 + rm /tmp/62.TMu /tmp/spdk_tgt_config.json.590 00:05:42.274 + exit 1 00:05:42.274 14:42:24 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:42.274 INFO: configuration change detected. 
00:05:42.274 14:42:24 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:42.274 14:42:24 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:42.274 14:42:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:42.274 14:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.274 14:42:24 -- json_config/json_config.sh@307 -- # local ret=0 00:05:42.274 14:42:24 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:42.274 14:42:24 -- json_config/json_config.sh@317 -- # [[ -n 863659 ]] 00:05:42.274 14:42:24 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:42.274 14:42:24 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:42.274 14:42:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:42.274 14:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.274 14:42:24 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:42.274 14:42:24 -- json_config/json_config.sh@193 -- # uname -s 00:05:42.274 14:42:24 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:42.274 14:42:24 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:42.274 14:42:24 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:42.274 14:42:24 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:42.274 14:42:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:42.274 14:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.274 14:42:24 -- json_config/json_config.sh@323 -- # killprocess 863659 00:05:42.274 14:42:24 -- common/autotest_common.sh@936 -- # '[' -z 863659 ']' 00:05:42.274 14:42:24 -- common/autotest_common.sh@940 -- # kill -0 863659 00:05:42.274 14:42:24 -- common/autotest_common.sh@941 -- # uname 00:05:42.274 14:42:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.274 14:42:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 863659 00:05:42.536 14:42:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.536 14:42:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.536 14:42:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 863659' 00:05:42.536 killing process with pid 863659 00:05:42.536 14:42:24 -- common/autotest_common.sh@955 -- # kill 863659 00:05:42.536 14:42:24 -- common/autotest_common.sh@960 -- # wait 863659 00:05:42.798 14:42:25 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.798 14:42:25 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:42.798 14:42:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:42.798 14:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:42.798 14:42:25 -- json_config/json_config.sh@328 -- # return 0 00:05:42.798 14:42:25 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:42.798 INFO: Success 00:05:42.798 00:05:42.798 real 0m7.109s 00:05:42.798 user 0m8.565s 00:05:42.798 sys 0m1.750s 00:05:42.798 14:42:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.798 14:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:42.798 ************************************ 00:05:42.798 END TEST json_config 00:05:42.798 ************************************ 00:05:42.798 14:42:25 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:42.798 14:42:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.798 14:42:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.798 14:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:42.798 ************************************ 00:05:42.798 START TEST json_config_extra_key 00:05:42.798 ************************************ 00:05:42.798 14:42:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.060 14:42:25 -- nvmf/common.sh@7 -- # uname -s 00:05:43.060 14:42:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.060 14:42:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.060 14:42:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.060 14:42:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.060 14:42:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.060 14:42:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.060 14:42:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.060 14:42:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.060 14:42:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.060 14:42:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.060 14:42:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:43.060 14:42:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:43.060 14:42:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.060 14:42:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.060 14:42:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.060 14:42:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.060 14:42:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.060 14:42:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.060 14:42:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.060 14:42:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.060 14:42:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.060 14:42:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.060 14:42:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.060 14:42:25 -- paths/export.sh@5 -- # export PATH 00:05:43.060 14:42:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.060 14:42:25 -- nvmf/common.sh@47 -- # : 0 00:05:43.060 14:42:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.060 14:42:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.060 14:42:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.060 14:42:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.060 14:42:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.060 14:42:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.060 14:42:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.060 14:42:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:43.060 INFO: launching applications... 
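
extra_key.json is a small canned configuration shipped with the test; the target is simply launched with --json pointing at it, as the next lines show. For orientation, a file of the same shape can be as small as this (the contents below are an illustration, not the real extra_key.json):

cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 2048, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
./build/bin/spdk_tgt -m 0x1 -s 1024 --json /tmp/minimal_config.json
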
00:05:43.060 14:42:25 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.060 14:42:25 -- json_config/common.sh@9 -- # local app=target 00:05:43.060 14:42:25 -- json_config/common.sh@10 -- # shift 00:05:43.060 14:42:25 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.060 14:42:25 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.060 14:42:25 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.060 14:42:25 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.060 14:42:25 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.060 14:42:25 -- json_config/common.sh@22 -- # app_pid["$app"]=864441 00:05:43.060 14:42:25 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.060 Waiting for target to run... 00:05:43.060 14:42:25 -- json_config/common.sh@25 -- # waitforlisten 864441 /var/tmp/spdk_tgt.sock 00:05:43.060 14:42:25 -- common/autotest_common.sh@817 -- # '[' -z 864441 ']' 00:05:43.060 14:42:25 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.060 14:42:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.060 14:42:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.060 14:42:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.060 14:42:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.060 14:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.060 [2024-04-26 14:42:25.619140] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:43.060 [2024-04-26 14:42:25.619215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864441 ] 00:05:43.060 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.322 [2024-04-26 14:42:25.910047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.322 [2024-04-26 14:42:25.963770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.892 14:42:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.892 14:42:26 -- common/autotest_common.sh@850 -- # return 0 00:05:43.892 14:42:26 -- json_config/common.sh@26 -- # echo '' 00:05:43.892 00:05:43.892 14:42:26 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:43.892 INFO: shutting down applications... 
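
Shutdown is cooperative: the harness sends SIGINT and then polls the pid for up to thirty half-second intervals, which is the (( i < 30 )) / kill -0 / sleep 0.5 sequence traced below. In plain shell (the pid variable is a stand-in):

kill -SIGINT "$app_pid"
i=0
while (( i < 30 )); do
    kill -0 "$app_pid" 2>/dev/null || break   # process gone: shutdown finished
    sleep 0.5
    i=$(( i + 1 ))
done
echo 'SPDK target shutdown done'
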
00:05:43.892 14:42:26 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:43.892 14:42:26 -- json_config/common.sh@31 -- # local app=target 00:05:43.892 14:42:26 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:43.892 14:42:26 -- json_config/common.sh@35 -- # [[ -n 864441 ]] 00:05:43.892 14:42:26 -- json_config/common.sh@38 -- # kill -SIGINT 864441 00:05:43.892 14:42:26 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:43.892 14:42:26 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.892 14:42:26 -- json_config/common.sh@41 -- # kill -0 864441 00:05:43.892 14:42:26 -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.465 14:42:26 -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.466 14:42:26 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.466 14:42:26 -- json_config/common.sh@41 -- # kill -0 864441 00:05:44.466 14:42:26 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.466 14:42:26 -- json_config/common.sh@43 -- # break 00:05:44.466 14:42:26 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.466 14:42:26 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.466 SPDK target shutdown done 00:05:44.466 14:42:26 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:44.466 Success 00:05:44.466 00:05:44.466 real 0m1.436s 00:05:44.466 user 0m1.049s 00:05:44.466 sys 0m0.401s 00:05:44.466 14:42:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.466 14:42:26 -- common/autotest_common.sh@10 -- # set +x 00:05:44.466 ************************************ 00:05:44.466 END TEST json_config_extra_key 00:05:44.466 ************************************ 00:05:44.466 14:42:26 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.466 14:42:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.466 14:42:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.466 14:42:26 -- common/autotest_common.sh@10 -- # set +x 00:05:44.466 ************************************ 00:05:44.466 START TEST alias_rpc 00:05:44.466 ************************************ 00:05:44.466 14:42:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.727 * Looking for test storage... 00:05:44.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:44.727 14:42:27 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.727 14:42:27 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=864827 00:05:44.727 14:42:27 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 864827 00:05:44.727 14:42:27 -- common/autotest_common.sh@817 -- # '[' -z 864827 ']' 00:05:44.727 14:42:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.727 14:42:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.727 14:42:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
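
Once this target is listening, the alias test drives it with scripts/rpc.py load_config -i (visible just below). As far as I can tell, -i / --include-aliases lets the loaded configuration keep using deprecated RPC method names; treat that reading as an assumption and check rpc.py --help for the authoritative description. Minimal invocation (the config file name is a placeholder):

./scripts/rpc.py -s /var/tmp/spdk.sock load_config -i < config_with_legacy_method_names.json
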
00:05:44.727 14:42:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.727 14:42:27 -- common/autotest_common.sh@10 -- # set +x 00:05:44.727 14:42:27 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.727 [2024-04-26 14:42:27.252609] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:44.727 [2024-04-26 14:42:27.252674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864827 ] 00:05:44.727 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.727 [2024-04-26 14:42:27.317651] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.988 [2024-04-26 14:42:27.393817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.558 14:42:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.558 14:42:28 -- common/autotest_common.sh@850 -- # return 0 00:05:45.558 14:42:28 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:45.558 14:42:28 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 864827 00:05:45.558 14:42:28 -- common/autotest_common.sh@936 -- # '[' -z 864827 ']' 00:05:45.558 14:42:28 -- common/autotest_common.sh@940 -- # kill -0 864827 00:05:45.558 14:42:28 -- common/autotest_common.sh@941 -- # uname 00:05:45.558 14:42:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.558 14:42:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 864827 00:05:45.818 14:42:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.818 14:42:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.818 14:42:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 864827' 00:05:45.818 killing process with pid 864827 00:05:45.818 14:42:28 -- common/autotest_common.sh@955 -- # kill 864827 00:05:45.818 14:42:28 -- common/autotest_common.sh@960 -- # wait 864827 00:05:45.818 00:05:45.818 real 0m1.363s 00:05:45.818 user 0m1.482s 00:05:45.818 sys 0m0.382s 00:05:45.818 14:42:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.818 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:45.818 ************************************ 00:05:45.818 END TEST alias_rpc 00:05:45.818 ************************************ 00:05:46.079 14:42:28 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:46.079 14:42:28 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.079 14:42:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.080 14:42:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.080 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:46.080 ************************************ 00:05:46.080 START TEST spdkcli_tcp 00:05:46.080 ************************************ 00:05:46.080 14:42:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.080 * Looking for test storage... 
00:05:46.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:46.080 14:42:28 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:46.080 14:42:28 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:46.080 14:42:28 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:46.080 14:42:28 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.080 14:42:28 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.080 14:42:28 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.080 14:42:28 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.080 14:42:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:46.080 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:46.340 14:42:28 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=865146 00:05:46.340 14:42:28 -- spdkcli/tcp.sh@27 -- # waitforlisten 865146 00:05:46.340 14:42:28 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.340 14:42:28 -- common/autotest_common.sh@817 -- # '[' -z 865146 ']' 00:05:46.340 14:42:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.340 14:42:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.340 14:42:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.340 14:42:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.340 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:46.340 [2024-04-26 14:42:28.805772] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:46.340 [2024-04-26 14:42:28.805853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865146 ] 00:05:46.340 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.340 [2024-04-26 14:42:28.872089] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.341 [2024-04-26 14:42:28.945357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.341 [2024-04-26 14:42:28.945360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.282 14:42:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.282 14:42:29 -- common/autotest_common.sh@850 -- # return 0 00:05:47.282 14:42:29 -- spdkcli/tcp.sh@31 -- # socat_pid=865244 00:05:47.282 14:42:29 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:47.282 14:42:29 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:47.282 [ 00:05:47.282 "bdev_malloc_delete", 00:05:47.282 "bdev_malloc_create", 00:05:47.282 "bdev_null_resize", 00:05:47.282 "bdev_null_delete", 00:05:47.282 "bdev_null_create", 00:05:47.282 "bdev_nvme_cuse_unregister", 00:05:47.282 "bdev_nvme_cuse_register", 00:05:47.282 "bdev_opal_new_user", 00:05:47.282 "bdev_opal_set_lock_state", 00:05:47.282 "bdev_opal_delete", 00:05:47.282 "bdev_opal_get_info", 00:05:47.282 "bdev_opal_create", 00:05:47.282 "bdev_nvme_opal_revert", 00:05:47.282 "bdev_nvme_opal_init", 00:05:47.282 "bdev_nvme_send_cmd", 00:05:47.282 "bdev_nvme_get_path_iostat", 00:05:47.282 "bdev_nvme_get_mdns_discovery_info", 00:05:47.282 "bdev_nvme_stop_mdns_discovery", 00:05:47.282 "bdev_nvme_start_mdns_discovery", 00:05:47.282 "bdev_nvme_set_multipath_policy", 00:05:47.282 "bdev_nvme_set_preferred_path", 00:05:47.282 "bdev_nvme_get_io_paths", 00:05:47.282 "bdev_nvme_remove_error_injection", 00:05:47.282 "bdev_nvme_add_error_injection", 00:05:47.282 "bdev_nvme_get_discovery_info", 00:05:47.282 "bdev_nvme_stop_discovery", 00:05:47.282 "bdev_nvme_start_discovery", 00:05:47.282 "bdev_nvme_get_controller_health_info", 00:05:47.282 "bdev_nvme_disable_controller", 00:05:47.282 "bdev_nvme_enable_controller", 00:05:47.282 "bdev_nvme_reset_controller", 00:05:47.282 "bdev_nvme_get_transport_statistics", 00:05:47.282 "bdev_nvme_apply_firmware", 00:05:47.282 "bdev_nvme_detach_controller", 00:05:47.282 "bdev_nvme_get_controllers", 00:05:47.282 "bdev_nvme_attach_controller", 00:05:47.282 "bdev_nvme_set_hotplug", 00:05:47.282 "bdev_nvme_set_options", 00:05:47.282 "bdev_passthru_delete", 00:05:47.282 "bdev_passthru_create", 00:05:47.282 "bdev_lvol_grow_lvstore", 00:05:47.282 "bdev_lvol_get_lvols", 00:05:47.282 "bdev_lvol_get_lvstores", 00:05:47.282 "bdev_lvol_delete", 00:05:47.282 "bdev_lvol_set_read_only", 00:05:47.282 "bdev_lvol_resize", 00:05:47.282 "bdev_lvol_decouple_parent", 00:05:47.282 "bdev_lvol_inflate", 00:05:47.282 "bdev_lvol_rename", 00:05:47.282 "bdev_lvol_clone_bdev", 00:05:47.282 "bdev_lvol_clone", 00:05:47.282 "bdev_lvol_snapshot", 00:05:47.282 "bdev_lvol_create", 00:05:47.282 "bdev_lvol_delete_lvstore", 00:05:47.282 "bdev_lvol_rename_lvstore", 00:05:47.282 "bdev_lvol_create_lvstore", 00:05:47.282 "bdev_raid_set_options", 00:05:47.282 "bdev_raid_remove_base_bdev", 00:05:47.282 "bdev_raid_add_base_bdev", 00:05:47.282 "bdev_raid_delete", 00:05:47.282 "bdev_raid_create", 
00:05:47.282 "bdev_raid_get_bdevs", 00:05:47.282 "bdev_error_inject_error", 00:05:47.282 "bdev_error_delete", 00:05:47.282 "bdev_error_create", 00:05:47.282 "bdev_split_delete", 00:05:47.282 "bdev_split_create", 00:05:47.282 "bdev_delay_delete", 00:05:47.282 "bdev_delay_create", 00:05:47.282 "bdev_delay_update_latency", 00:05:47.282 "bdev_zone_block_delete", 00:05:47.282 "bdev_zone_block_create", 00:05:47.282 "blobfs_create", 00:05:47.282 "blobfs_detect", 00:05:47.282 "blobfs_set_cache_size", 00:05:47.282 "bdev_aio_delete", 00:05:47.282 "bdev_aio_rescan", 00:05:47.282 "bdev_aio_create", 00:05:47.282 "bdev_ftl_set_property", 00:05:47.282 "bdev_ftl_get_properties", 00:05:47.282 "bdev_ftl_get_stats", 00:05:47.282 "bdev_ftl_unmap", 00:05:47.282 "bdev_ftl_unload", 00:05:47.282 "bdev_ftl_delete", 00:05:47.282 "bdev_ftl_load", 00:05:47.282 "bdev_ftl_create", 00:05:47.282 "bdev_virtio_attach_controller", 00:05:47.282 "bdev_virtio_scsi_get_devices", 00:05:47.282 "bdev_virtio_detach_controller", 00:05:47.282 "bdev_virtio_blk_set_hotplug", 00:05:47.282 "bdev_iscsi_delete", 00:05:47.282 "bdev_iscsi_create", 00:05:47.282 "bdev_iscsi_set_options", 00:05:47.282 "accel_error_inject_error", 00:05:47.282 "ioat_scan_accel_module", 00:05:47.282 "dsa_scan_accel_module", 00:05:47.282 "iaa_scan_accel_module", 00:05:47.282 "vfu_virtio_create_scsi_endpoint", 00:05:47.282 "vfu_virtio_scsi_remove_target", 00:05:47.282 "vfu_virtio_scsi_add_target", 00:05:47.282 "vfu_virtio_create_blk_endpoint", 00:05:47.282 "vfu_virtio_delete_endpoint", 00:05:47.282 "keyring_file_remove_key", 00:05:47.282 "keyring_file_add_key", 00:05:47.282 "iscsi_get_histogram", 00:05:47.282 "iscsi_enable_histogram", 00:05:47.282 "iscsi_set_options", 00:05:47.282 "iscsi_get_auth_groups", 00:05:47.282 "iscsi_auth_group_remove_secret", 00:05:47.282 "iscsi_auth_group_add_secret", 00:05:47.282 "iscsi_delete_auth_group", 00:05:47.282 "iscsi_create_auth_group", 00:05:47.282 "iscsi_set_discovery_auth", 00:05:47.282 "iscsi_get_options", 00:05:47.282 "iscsi_target_node_request_logout", 00:05:47.282 "iscsi_target_node_set_redirect", 00:05:47.282 "iscsi_target_node_set_auth", 00:05:47.282 "iscsi_target_node_add_lun", 00:05:47.282 "iscsi_get_stats", 00:05:47.282 "iscsi_get_connections", 00:05:47.282 "iscsi_portal_group_set_auth", 00:05:47.282 "iscsi_start_portal_group", 00:05:47.282 "iscsi_delete_portal_group", 00:05:47.282 "iscsi_create_portal_group", 00:05:47.282 "iscsi_get_portal_groups", 00:05:47.282 "iscsi_delete_target_node", 00:05:47.282 "iscsi_target_node_remove_pg_ig_maps", 00:05:47.282 "iscsi_target_node_add_pg_ig_maps", 00:05:47.282 "iscsi_create_target_node", 00:05:47.282 "iscsi_get_target_nodes", 00:05:47.282 "iscsi_delete_initiator_group", 00:05:47.282 "iscsi_initiator_group_remove_initiators", 00:05:47.282 "iscsi_initiator_group_add_initiators", 00:05:47.282 "iscsi_create_initiator_group", 00:05:47.282 "iscsi_get_initiator_groups", 00:05:47.282 "nvmf_set_crdt", 00:05:47.282 "nvmf_set_config", 00:05:47.282 "nvmf_set_max_subsystems", 00:05:47.282 "nvmf_subsystem_get_listeners", 00:05:47.282 "nvmf_subsystem_get_qpairs", 00:05:47.282 "nvmf_subsystem_get_controllers", 00:05:47.282 "nvmf_get_stats", 00:05:47.282 "nvmf_get_transports", 00:05:47.282 "nvmf_create_transport", 00:05:47.282 "nvmf_get_targets", 00:05:47.282 "nvmf_delete_target", 00:05:47.282 "nvmf_create_target", 00:05:47.282 "nvmf_subsystem_allow_any_host", 00:05:47.282 "nvmf_subsystem_remove_host", 00:05:47.282 "nvmf_subsystem_add_host", 00:05:47.282 "nvmf_ns_remove_host", 00:05:47.282 
"nvmf_ns_add_host", 00:05:47.282 "nvmf_subsystem_remove_ns", 00:05:47.282 "nvmf_subsystem_add_ns", 00:05:47.282 "nvmf_subsystem_listener_set_ana_state", 00:05:47.282 "nvmf_discovery_get_referrals", 00:05:47.282 "nvmf_discovery_remove_referral", 00:05:47.282 "nvmf_discovery_add_referral", 00:05:47.282 "nvmf_subsystem_remove_listener", 00:05:47.282 "nvmf_subsystem_add_listener", 00:05:47.282 "nvmf_delete_subsystem", 00:05:47.282 "nvmf_create_subsystem", 00:05:47.282 "nvmf_get_subsystems", 00:05:47.282 "env_dpdk_get_mem_stats", 00:05:47.282 "nbd_get_disks", 00:05:47.282 "nbd_stop_disk", 00:05:47.282 "nbd_start_disk", 00:05:47.282 "ublk_recover_disk", 00:05:47.282 "ublk_get_disks", 00:05:47.282 "ublk_stop_disk", 00:05:47.282 "ublk_start_disk", 00:05:47.282 "ublk_destroy_target", 00:05:47.282 "ublk_create_target", 00:05:47.282 "virtio_blk_create_transport", 00:05:47.282 "virtio_blk_get_transports", 00:05:47.282 "vhost_controller_set_coalescing", 00:05:47.282 "vhost_get_controllers", 00:05:47.282 "vhost_delete_controller", 00:05:47.282 "vhost_create_blk_controller", 00:05:47.282 "vhost_scsi_controller_remove_target", 00:05:47.282 "vhost_scsi_controller_add_target", 00:05:47.282 "vhost_start_scsi_controller", 00:05:47.282 "vhost_create_scsi_controller", 00:05:47.282 "thread_set_cpumask", 00:05:47.282 "framework_get_scheduler", 00:05:47.282 "framework_set_scheduler", 00:05:47.282 "framework_get_reactors", 00:05:47.282 "thread_get_io_channels", 00:05:47.282 "thread_get_pollers", 00:05:47.282 "thread_get_stats", 00:05:47.282 "framework_monitor_context_switch", 00:05:47.282 "spdk_kill_instance", 00:05:47.282 "log_enable_timestamps", 00:05:47.283 "log_get_flags", 00:05:47.283 "log_clear_flag", 00:05:47.283 "log_set_flag", 00:05:47.283 "log_get_level", 00:05:47.283 "log_set_level", 00:05:47.283 "log_get_print_level", 00:05:47.283 "log_set_print_level", 00:05:47.283 "framework_enable_cpumask_locks", 00:05:47.283 "framework_disable_cpumask_locks", 00:05:47.283 "framework_wait_init", 00:05:47.283 "framework_start_init", 00:05:47.283 "scsi_get_devices", 00:05:47.283 "bdev_get_histogram", 00:05:47.283 "bdev_enable_histogram", 00:05:47.283 "bdev_set_qos_limit", 00:05:47.283 "bdev_set_qd_sampling_period", 00:05:47.283 "bdev_get_bdevs", 00:05:47.283 "bdev_reset_iostat", 00:05:47.283 "bdev_get_iostat", 00:05:47.283 "bdev_examine", 00:05:47.283 "bdev_wait_for_examine", 00:05:47.283 "bdev_set_options", 00:05:47.283 "notify_get_notifications", 00:05:47.283 "notify_get_types", 00:05:47.283 "accel_get_stats", 00:05:47.283 "accel_set_options", 00:05:47.283 "accel_set_driver", 00:05:47.283 "accel_crypto_key_destroy", 00:05:47.283 "accel_crypto_keys_get", 00:05:47.283 "accel_crypto_key_create", 00:05:47.283 "accel_assign_opc", 00:05:47.283 "accel_get_module_info", 00:05:47.283 "accel_get_opc_assignments", 00:05:47.283 "vmd_rescan", 00:05:47.283 "vmd_remove_device", 00:05:47.283 "vmd_enable", 00:05:47.283 "sock_get_default_impl", 00:05:47.283 "sock_set_default_impl", 00:05:47.283 "sock_impl_set_options", 00:05:47.283 "sock_impl_get_options", 00:05:47.283 "iobuf_get_stats", 00:05:47.283 "iobuf_set_options", 00:05:47.283 "keyring_get_keys", 00:05:47.283 "framework_get_pci_devices", 00:05:47.283 "framework_get_config", 00:05:47.283 "framework_get_subsystems", 00:05:47.283 "vfu_tgt_set_base_path", 00:05:47.283 "trace_get_info", 00:05:47.283 "trace_get_tpoint_group_mask", 00:05:47.283 "trace_disable_tpoint_group", 00:05:47.283 "trace_enable_tpoint_group", 00:05:47.283 "trace_clear_tpoint_mask", 00:05:47.283 
"trace_set_tpoint_mask", 00:05:47.283 "spdk_get_version", 00:05:47.283 "rpc_get_methods" 00:05:47.283 ] 00:05:47.283 14:42:29 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:47.283 14:42:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:47.283 14:42:29 -- common/autotest_common.sh@10 -- # set +x 00:05:47.283 14:42:29 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.283 14:42:29 -- spdkcli/tcp.sh@38 -- # killprocess 865146 00:05:47.283 14:42:29 -- common/autotest_common.sh@936 -- # '[' -z 865146 ']' 00:05:47.283 14:42:29 -- common/autotest_common.sh@940 -- # kill -0 865146 00:05:47.283 14:42:29 -- common/autotest_common.sh@941 -- # uname 00:05:47.283 14:42:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.283 14:42:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 865146 00:05:47.283 14:42:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.283 14:42:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.283 14:42:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 865146' 00:05:47.283 killing process with pid 865146 00:05:47.283 14:42:29 -- common/autotest_common.sh@955 -- # kill 865146 00:05:47.283 14:42:29 -- common/autotest_common.sh@960 -- # wait 865146 00:05:47.594 00:05:47.594 real 0m1.421s 00:05:47.594 user 0m2.611s 00:05:47.594 sys 0m0.429s 00:05:47.594 14:42:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.594 14:42:30 -- common/autotest_common.sh@10 -- # set +x 00:05:47.594 ************************************ 00:05:47.594 END TEST spdkcli_tcp 00:05:47.594 ************************************ 00:05:47.594 14:42:30 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.594 14:42:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.594 14:42:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.594 14:42:30 -- common/autotest_common.sh@10 -- # set +x 00:05:47.594 ************************************ 00:05:47.594 START TEST dpdk_mem_utility 00:05:47.594 ************************************ 00:05:47.594 14:42:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.889 * Looking for test storage... 00:05:47.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:47.889 14:42:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.889 14:42:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=865495 00:05:47.889 14:42:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 865495 00:05:47.889 14:42:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.889 14:42:30 -- common/autotest_common.sh@817 -- # '[' -z 865495 ']' 00:05:47.889 14:42:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.889 14:42:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.889 14:42:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.889 14:42:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.889 14:42:30 -- common/autotest_common.sh@10 -- # set +x 00:05:47.889 [2024-04-26 14:42:30.411071] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:47.889 [2024-04-26 14:42:30.411139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865495 ] 00:05:47.889 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.889 [2024-04-26 14:42:30.476590] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.889 [2024-04-26 14:42:30.550186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.831 14:42:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.831 14:42:31 -- common/autotest_common.sh@850 -- # return 0 00:05:48.831 14:42:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:48.831 14:42:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:48.831 14:42:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:48.831 14:42:31 -- common/autotest_common.sh@10 -- # set +x 00:05:48.831 { 00:05:48.831 "filename": "/tmp/spdk_mem_dump.txt" 00:05:48.831 } 00:05:48.831 14:42:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:48.831 14:42:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.831 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:48.831 1 heaps totaling size 814.000000 MiB 00:05:48.831 size: 814.000000 MiB heap id: 0 00:05:48.831 end heaps---------- 00:05:48.831 8 mempools totaling size 598.116089 MiB 00:05:48.831 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:48.831 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:48.831 size: 84.521057 MiB name: bdev_io_865495 00:05:48.831 size: 51.011292 MiB name: evtpool_865495 00:05:48.831 size: 50.003479 MiB name: msgpool_865495 00:05:48.831 size: 21.763794 MiB name: PDU_Pool 00:05:48.831 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:48.831 size: 0.026123 MiB name: Session_Pool 00:05:48.831 end mempools------- 00:05:48.831 6 memzones totaling size 4.142822 MiB 00:05:48.831 size: 1.000366 MiB name: RG_ring_0_865495 00:05:48.831 size: 1.000366 MiB name: RG_ring_1_865495 00:05:48.831 size: 1.000366 MiB name: RG_ring_4_865495 00:05:48.831 size: 1.000366 MiB name: RG_ring_5_865495 00:05:48.831 size: 0.125366 MiB name: RG_ring_2_865495 00:05:48.831 size: 0.015991 MiB name: RG_ring_3_865495 00:05:48.831 end memzones------- 00:05:48.831 14:42:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:48.831 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:48.831 list of free elements. 
size: 12.519348 MiB 00:05:48.831 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:48.831 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:48.831 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:48.831 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:48.831 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:48.831 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:48.831 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:48.831 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:48.831 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:48.831 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:48.831 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:48.831 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:48.831 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:48.831 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:48.831 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:48.831 list of standard malloc elements. size: 199.218079 MiB 00:05:48.831 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:48.831 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:48.831 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:48.831 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:48.831 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:48.831 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:48.831 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:48.831 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:48.831 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:48.831 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:48.831 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:48.831 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:48.831 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:48.831 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:48.831 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:48.831 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:48.831 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:48.831 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:48.831 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:48.831 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:48.831 list of memzone associated elements. size: 602.262573 MiB 00:05:48.831 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:48.831 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:48.831 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:48.831 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:48.831 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:48.831 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_865495_0 00:05:48.831 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:48.831 associated memzone info: size: 48.002930 MiB name: MP_evtpool_865495_0 00:05:48.831 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:48.831 associated memzone info: size: 48.002930 MiB name: MP_msgpool_865495_0 00:05:48.831 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:48.831 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:48.831 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:48.831 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:48.831 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:48.831 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_865495 00:05:48.831 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:48.831 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_865495 00:05:48.831 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:48.831 associated memzone info: size: 1.007996 MiB name: MP_evtpool_865495 00:05:48.831 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:48.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:48.831 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:48.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:48.831 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:48.831 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:48.831 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:48.831 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:48.831 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:48.831 associated memzone info: size: 1.000366 MiB name: RG_ring_0_865495 00:05:48.831 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:48.831 associated memzone info: size: 1.000366 MiB name: RG_ring_1_865495 00:05:48.831 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:48.831 associated memzone info: size: 1.000366 MiB name: RG_ring_4_865495 00:05:48.831 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:48.831 associated memzone info: size: 1.000366 MiB name: RG_ring_5_865495 00:05:48.831 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:48.831 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_865495 00:05:48.831 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:48.831 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:48.831 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:48.831 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:48.831 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:48.831 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:48.831 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:48.831 associated memzone info: size: 0.125366 MiB name: RG_ring_2_865495 00:05:48.831 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:48.831 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:48.831 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:48.831 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:48.831 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:48.831 associated memzone info: size: 0.015991 MiB name: RG_ring_3_865495 00:05:48.832 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:48.832 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:48.832 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:48.832 associated memzone info: size: 0.000183 MiB name: MP_msgpool_865495 00:05:48.832 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:48.832 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_865495 00:05:48.832 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:48.832 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:48.832 14:42:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:48.832 14:42:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 865495 00:05:48.832 14:42:31 -- common/autotest_common.sh@936 -- # '[' -z 865495 ']' 00:05:48.832 14:42:31 -- common/autotest_common.sh@940 -- # kill -0 865495 00:05:48.832 14:42:31 -- common/autotest_common.sh@941 -- # uname 00:05:48.832 14:42:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.832 14:42:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 865495 00:05:48.832 14:42:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.832 14:42:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.832 14:42:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 865495' 00:05:48.832 killing process with pid 865495 00:05:48.832 14:42:31 -- common/autotest_common.sh@955 -- # kill 865495 00:05:48.832 14:42:31 -- common/autotest_common.sh@960 -- # wait 865495 00:05:49.092 00:05:49.092 real 0m1.274s 00:05:49.092 user 0m1.344s 00:05:49.092 sys 0m0.362s 00:05:49.092 14:42:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.092 14:42:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.092 ************************************ 00:05:49.092 END TEST dpdk_mem_utility 00:05:49.092 ************************************ 00:05:49.092 14:42:31 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.092 14:42:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.092 14:42:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.092 14:42:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.092 
************************************ 00:05:49.092 START TEST event 00:05:49.092 ************************************ 00:05:49.092 14:42:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.353 * Looking for test storage... 00:05:49.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.353 14:42:31 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:49.353 14:42:31 -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.353 14:42:31 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.353 14:42:31 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:49.353 14:42:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.353 14:42:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.353 ************************************ 00:05:49.353 START TEST event_perf 00:05:49.353 ************************************ 00:05:49.353 14:42:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.353 Running I/O for 1 seconds...[2024-04-26 14:42:32.007560] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:49.353 [2024-04-26 14:42:32.007656] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865871 ] 00:05:49.614 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.614 [2024-04-26 14:42:32.072541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.614 [2024-04-26 14:42:32.138747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.614 [2024-04-26 14:42:32.138892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.614 [2024-04-26 14:42:32.138925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.614 [2024-04-26 14:42:32.138926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.555 Running I/O for 1 seconds... 00:05:50.555 lcore 0: 164850 00:05:50.555 lcore 1: 164848 00:05:50.555 lcore 2: 164846 00:05:50.555 lcore 3: 164848 00:05:50.555 done. 
00:05:50.555 00:05:50.555 real 0m1.204s 00:05:50.555 user 0m4.133s 00:05:50.555 sys 0m0.071s 00:05:50.555 14:42:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.555 14:42:33 -- common/autotest_common.sh@10 -- # set +x 00:05:50.555 ************************************ 00:05:50.555 END TEST event_perf 00:05:50.555 ************************************ 00:05:50.816 14:42:33 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.816 14:42:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:50.816 14:42:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.816 14:42:33 -- common/autotest_common.sh@10 -- # set +x 00:05:50.816 ************************************ 00:05:50.816 START TEST event_reactor 00:05:50.816 ************************************ 00:05:50.816 14:42:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.816 [2024-04-26 14:42:33.394581] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:50.816 [2024-04-26 14:42:33.394676] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866103 ] 00:05:50.816 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.816 [2024-04-26 14:42:33.459609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.077 [2024-04-26 14:42:33.527200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.017 test_start 00:05:52.017 oneshot 00:05:52.017 tick 100 00:05:52.017 tick 100 00:05:52.017 tick 250 00:05:52.017 tick 100 00:05:52.017 tick 100 00:05:52.017 tick 100 00:05:52.017 tick 250 00:05:52.017 tick 500 00:05:52.017 tick 100 00:05:52.017 tick 100 00:05:52.017 tick 250 00:05:52.017 tick 100 00:05:52.017 tick 100 00:05:52.017 test_end 00:05:52.017 00:05:52.017 real 0m1.205s 00:05:52.017 user 0m1.132s 00:05:52.017 sys 0m0.069s 00:05:52.017 14:42:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.017 14:42:34 -- common/autotest_common.sh@10 -- # set +x 00:05:52.017 ************************************ 00:05:52.017 END TEST event_reactor 00:05:52.017 ************************************ 00:05:52.017 14:42:34 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.017 14:42:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:52.017 14:42:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.017 14:42:34 -- common/autotest_common.sh@10 -- # set +x 00:05:52.277 ************************************ 00:05:52.277 START TEST event_reactor_perf 00:05:52.277 ************************************ 00:05:52.277 14:42:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.277 [2024-04-26 14:42:34.788689] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:52.277 [2024-04-26 14:42:34.788795] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866461 ] 00:05:52.277 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.277 [2024-04-26 14:42:34.855714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.277 [2024-04-26 14:42:34.926594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.661 test_start 00:05:53.661 test_end 00:05:53.661 Performance: 365640 events per second 00:05:53.661 00:05:53.661 real 0m1.211s 00:05:53.661 user 0m1.124s 00:05:53.661 sys 0m0.082s 00:05:53.661 14:42:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.661 14:42:35 -- common/autotest_common.sh@10 -- # set +x 00:05:53.661 ************************************ 00:05:53.662 END TEST event_reactor_perf 00:05:53.662 ************************************ 00:05:53.662 14:42:36 -- event/event.sh@49 -- # uname -s 00:05:53.662 14:42:36 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.662 14:42:36 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.662 14:42:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.662 14:42:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.662 14:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:53.662 ************************************ 00:05:53.662 START TEST event_scheduler 00:05:53.662 ************************************ 00:05:53.662 14:42:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.662 * Looking for test storage... 00:05:53.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:53.662 14:42:36 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.662 14:42:36 -- scheduler/scheduler.sh@35 -- # scheduler_pid=866847 00:05:53.662 14:42:36 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.662 14:42:36 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.662 14:42:36 -- scheduler/scheduler.sh@37 -- # waitforlisten 866847 00:05:53.662 14:42:36 -- common/autotest_common.sh@817 -- # '[' -z 866847 ']' 00:05:53.662 14:42:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.662 14:42:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.662 14:42:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.662 14:42:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.662 14:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:53.662 [2024-04-26 14:42:36.322146] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:53.662 [2024-04-26 14:42:36.322212] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866847 ] 00:05:53.922 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.922 [2024-04-26 14:42:36.377998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.922 [2024-04-26 14:42:36.441482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.922 [2024-04-26 14:42:36.441645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.922 [2024-04-26 14:42:36.441801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.922 [2024-04-26 14:42:36.441802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.496 14:42:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.496 14:42:37 -- common/autotest_common.sh@850 -- # return 0 00:05:54.496 14:42:37 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.496 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.496 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:54.496 POWER: Env isn't set yet! 00:05:54.496 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:54.496 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.496 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.496 POWER: Attempting to initialise PSTAT power management... 00:05:54.496 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:54.496 POWER: Initialized successfully for lcore 0 power management 00:05:54.496 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:54.496 POWER: Initialized successfully for lcore 1 power management 00:05:54.496 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:54.496 POWER: Initialized successfully for lcore 2 power management 00:05:54.496 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:54.496 POWER: Initialized successfully for lcore 3 power management 00:05:54.496 14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.496 14:42:37 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.496 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.496 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:54.757 [2024-04-26 14:42:37.214401] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:54.757 14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.757 14:42:37 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.757 14:42:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.757 14:42:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.757 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:54.757 ************************************ 00:05:54.757 START TEST scheduler_create_thread 00:05:54.757 ************************************ 00:05:54.757 14:42:37 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:54.757 14:42:37 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.757 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.757 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:54.757 2 00:05:54.757 14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.757 14:42:37 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.758 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.758 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:54.758 3 00:05:54.758 14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.758 14:42:37 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.758 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.758 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:54.758 4 00:05:54.758 14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.758 14:42:37 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.758 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.758 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:55.019 5 00:05:55.019 14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.019 14:42:37 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:55.019 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.019 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:55.019 6 00:05:55.019 14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.019 14:42:37 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:55.019 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.019 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:55.019 7 00:05:55.019 14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.019 14:42:37 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:55.019 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.019 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:55.019 8 00:05:55.019 14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.019 14:42:37 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:55.019 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.019 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:55.281 9 00:05:55.281 
14:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:55.281 14:42:37 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:55.281 14:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:55.281 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:56.668 10 00:05:56.668 14:42:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:56.668 14:42:39 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:56.668 14:42:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:56.669 14:42:39 -- common/autotest_common.sh@10 -- # set +x 00:05:58.053 14:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.053 14:42:40 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:58.053 14:42:40 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:58.053 14:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.053 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:05:58.624 14:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.625 14:42:41 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:58.625 14:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.625 14:42:41 -- common/autotest_common.sh@10 -- # set +x 00:05:59.566 14:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:59.566 14:42:42 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:59.566 14:42:42 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:59.566 14:42:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:59.566 14:42:42 -- common/autotest_common.sh@10 -- # set +x 00:06:00.136 14:42:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.136 00:06:00.136 real 0m5.396s 00:06:00.136 user 0m0.026s 00:06:00.136 sys 0m0.004s 00:06:00.136 14:42:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.136 14:42:42 -- common/autotest_common.sh@10 -- # set +x 00:06:00.136 ************************************ 00:06:00.136 END TEST scheduler_create_thread 00:06:00.136 ************************************ 00:06:00.136 14:42:42 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:00.136 14:42:42 -- scheduler/scheduler.sh@46 -- # killprocess 866847 00:06:00.136 14:42:42 -- common/autotest_common.sh@936 -- # '[' -z 866847 ']' 00:06:00.136 14:42:42 -- common/autotest_common.sh@940 -- # kill -0 866847 00:06:00.395 14:42:42 -- common/autotest_common.sh@941 -- # uname 00:06:00.395 14:42:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.395 14:42:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 866847 00:06:00.395 14:42:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:00.395 14:42:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:00.395 14:42:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 866847' 00:06:00.395 killing process with pid 866847 00:06:00.396 14:42:42 -- common/autotest_common.sh@955 -- # kill 866847 00:06:00.396 14:42:42 -- common/autotest_common.sh@960 -- # wait 866847 00:06:00.396 [2024-04-26 14:42:43.042362] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:00.656 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:00.656 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:00.656 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:00.656 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:00.656 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:00.656 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:00.656 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:00.656 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:00.656 00:06:00.656 real 0m7.047s 00:06:00.656 user 0m14.243s 00:06:00.656 sys 0m0.388s 00:06:00.656 14:42:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.656 14:42:43 -- common/autotest_common.sh@10 -- # set +x 00:06:00.656 ************************************ 00:06:00.656 END TEST event_scheduler 00:06:00.656 ************************************ 00:06:00.656 14:42:43 -- event/event.sh@51 -- # modprobe -n nbd 00:06:00.656 14:42:43 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:00.656 14:42:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.656 14:42:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.656 14:42:43 -- common/autotest_common.sh@10 -- # set +x 00:06:00.916 ************************************ 00:06:00.916 START TEST app_repeat 00:06:00.916 ************************************ 00:06:00.916 14:42:43 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:00.916 14:42:43 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.916 14:42:43 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.916 14:42:43 -- event/event.sh@13 -- # local nbd_list 00:06:00.916 14:42:43 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.916 14:42:43 -- event/event.sh@14 -- # local bdev_list 00:06:00.916 14:42:43 -- event/event.sh@15 -- # local repeat_times=4 00:06:00.916 14:42:43 -- event/event.sh@17 -- # modprobe nbd 00:06:00.916 14:42:43 -- event/event.sh@19 -- # repeat_pid=868263 00:06:00.916 14:42:43 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.916 14:42:43 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.916 14:42:43 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 868263' 00:06:00.916 Process app_repeat pid: 868263 00:06:00.916 14:42:43 -- event/event.sh@23 -- # for i in {0..2} 00:06:00.916 14:42:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.916 spdk_app_start Round 0 00:06:00.916 14:42:43 -- event/event.sh@25 -- # waitforlisten 868263 /var/tmp/spdk-nbd.sock 00:06:00.916 14:42:43 -- common/autotest_common.sh@817 -- # '[' -z 868263 ']' 00:06:00.916 14:42:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.916 14:42:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.916 14:42:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:00.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.916 14:42:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.916 14:42:43 -- common/autotest_common.sh@10 -- # set +x 00:06:00.916 [2024-04-26 14:42:43.446251] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:00.916 [2024-04-26 14:42:43.446324] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868263 ] 00:06:00.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.916 [2024-04-26 14:42:43.508991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.916 [2024-04-26 14:42:43.574245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.916 [2024-04-26 14:42:43.574247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.858 14:42:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:01.858 14:42:44 -- common/autotest_common.sh@850 -- # return 0 00:06:01.858 14:42:44 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.858 Malloc0 00:06:01.858 14:42:44 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.119 Malloc1 00:06:02.119 14:42:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@12 -- # local i 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.119 /dev/nbd0 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.119 14:42:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:02.119 14:42:44 -- common/autotest_common.sh@855 -- # local i 00:06:02.119 14:42:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:02.119 14:42:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:02.119 14:42:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:02.119 14:42:44 -- 
common/autotest_common.sh@859 -- # break 00:06:02.119 14:42:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:02.119 14:42:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:02.119 14:42:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.119 1+0 records in 00:06:02.119 1+0 records out 00:06:02.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203716 s, 20.1 MB/s 00:06:02.119 14:42:44 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.119 14:42:44 -- common/autotest_common.sh@872 -- # size=4096 00:06:02.119 14:42:44 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.119 14:42:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:02.119 14:42:44 -- common/autotest_common.sh@875 -- # return 0 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.119 14:42:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.379 /dev/nbd1 00:06:02.379 14:42:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.379 14:42:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.379 14:42:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:02.379 14:42:44 -- common/autotest_common.sh@855 -- # local i 00:06:02.379 14:42:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:02.379 14:42:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:02.379 14:42:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:02.379 14:42:44 -- common/autotest_common.sh@859 -- # break 00:06:02.379 14:42:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:02.379 14:42:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:02.379 14:42:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.379 1+0 records in 00:06:02.379 1+0 records out 00:06:02.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216563 s, 18.9 MB/s 00:06:02.379 14:42:44 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.379 14:42:44 -- common/autotest_common.sh@872 -- # size=4096 00:06:02.379 14:42:44 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.379 14:42:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:02.379 14:42:44 -- common/autotest_common.sh@875 -- # return 0 00:06:02.379 14:42:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.379 14:42:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.379 14:42:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.379 14:42:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.379 14:42:44 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.639 { 00:06:02.639 "nbd_device": "/dev/nbd0", 00:06:02.639 "bdev_name": "Malloc0" 00:06:02.639 }, 00:06:02.639 { 00:06:02.639 "nbd_device": "/dev/nbd1", 
00:06:02.639 "bdev_name": "Malloc1" 00:06:02.639 } 00:06:02.639 ]' 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.639 { 00:06:02.639 "nbd_device": "/dev/nbd0", 00:06:02.639 "bdev_name": "Malloc0" 00:06:02.639 }, 00:06:02.639 { 00:06:02.639 "nbd_device": "/dev/nbd1", 00:06:02.639 "bdev_name": "Malloc1" 00:06:02.639 } 00:06:02.639 ]' 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.639 /dev/nbd1' 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.639 /dev/nbd1' 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.639 14:42:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.640 256+0 records in 00:06:02.640 256+0 records out 00:06:02.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115926 s, 90.5 MB/s 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.640 256+0 records in 00:06:02.640 256+0 records out 00:06:02.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158362 s, 66.2 MB/s 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.640 256+0 records in 00:06:02.640 256+0 records out 00:06:02.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168458 s, 62.2 MB/s 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@51 -- # local i 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.640 14:42:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@41 -- # break 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.900 14:42:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@41 -- # break 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@65 -- # true 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.160 14:42:45 -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.160 14:42:45 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.419 14:42:45 -- event/event.sh@35 -- # 
sleep 3 00:06:03.419 [2024-04-26 14:42:46.072637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.678 [2024-04-26 14:42:46.133611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.678 [2024-04-26 14:42:46.133612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.678 [2024-04-26 14:42:46.165454] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.678 [2024-04-26 14:42:46.165490] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.979 14:42:48 -- event/event.sh@23 -- # for i in {0..2} 00:06:06.979 14:42:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:06.979 spdk_app_start Round 1 00:06:06.979 14:42:48 -- event/event.sh@25 -- # waitforlisten 868263 /var/tmp/spdk-nbd.sock 00:06:06.979 14:42:48 -- common/autotest_common.sh@817 -- # '[' -z 868263 ']' 00:06:06.979 14:42:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.979 14:42:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:06.979 14:42:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.979 14:42:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:06.979 14:42:48 -- common/autotest_common.sh@10 -- # set +x 00:06:06.979 14:42:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:06.979 14:42:49 -- common/autotest_common.sh@850 -- # return 0 00:06:06.979 14:42:49 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.979 Malloc0 00:06:06.979 14:42:49 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.979 Malloc1 00:06:06.979 14:42:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@12 -- # local i 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.979 14:42:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.980 14:42:49 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.980 /dev/nbd0 00:06:06.980 14:42:49 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.980 14:42:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.980 14:42:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:06.980 14:42:49 -- common/autotest_common.sh@855 -- # local i 00:06:06.980 14:42:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:06.980 14:42:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:06.980 14:42:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:06.980 14:42:49 -- common/autotest_common.sh@859 -- # break 00:06:06.980 14:42:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:06.980 14:42:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:06.980 14:42:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.980 1+0 records in 00:06:06.980 1+0 records out 00:06:06.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293913 s, 13.9 MB/s 00:06:06.980 14:42:49 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.980 14:42:49 -- common/autotest_common.sh@872 -- # size=4096 00:06:06.980 14:42:49 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.980 14:42:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:06.980 14:42:49 -- common/autotest_common.sh@875 -- # return 0 00:06:06.980 14:42:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.980 14:42:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.980 14:42:49 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.241 /dev/nbd1 00:06:07.241 14:42:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.241 14:42:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.241 14:42:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:07.241 14:42:49 -- common/autotest_common.sh@855 -- # local i 00:06:07.241 14:42:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:07.241 14:42:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:07.241 14:42:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:07.241 14:42:49 -- common/autotest_common.sh@859 -- # break 00:06:07.241 14:42:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:07.241 14:42:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:07.241 14:42:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.241 1+0 records in 00:06:07.241 1+0 records out 00:06:07.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222798 s, 18.4 MB/s 00:06:07.241 14:42:49 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.241 14:42:49 -- common/autotest_common.sh@872 -- # size=4096 00:06:07.241 14:42:49 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.241 14:42:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:07.241 14:42:49 -- common/autotest_common.sh@875 -- # return 0 00:06:07.241 14:42:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.241 14:42:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.241 14:42:49 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.241 14:42:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.241 14:42:49 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.503 { 00:06:07.503 "nbd_device": "/dev/nbd0", 00:06:07.503 "bdev_name": "Malloc0" 00:06:07.503 }, 00:06:07.503 { 00:06:07.503 "nbd_device": "/dev/nbd1", 00:06:07.503 "bdev_name": "Malloc1" 00:06:07.503 } 00:06:07.503 ]' 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.503 { 00:06:07.503 "nbd_device": "/dev/nbd0", 00:06:07.503 "bdev_name": "Malloc0" 00:06:07.503 }, 00:06:07.503 { 00:06:07.503 "nbd_device": "/dev/nbd1", 00:06:07.503 "bdev_name": "Malloc1" 00:06:07.503 } 00:06:07.503 ]' 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.503 /dev/nbd1' 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.503 /dev/nbd1' 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.503 14:42:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.503 256+0 records in 00:06:07.503 256+0 records out 00:06:07.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122789 s, 85.4 MB/s 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.503 256+0 records in 00:06:07.503 256+0 records out 00:06:07.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161922 s, 64.8 MB/s 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.503 256+0 records in 00:06:07.503 256+0 records out 00:06:07.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170761 s, 61.4 MB/s 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@51 -- # local i 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.503 14:42:50 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@41 -- # break 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@41 -- # break 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.764 14:42:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@65 -- # true 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.025 14:42:50 -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.025 14:42:50 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.286 14:42:50 -- event/event.sh@35 -- # sleep 3 00:06:08.286 [2024-04-26 14:42:50.915548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.547 [2024-04-26 14:42:50.977193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.547 [2024-04-26 14:42:50.977194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.547 [2024-04-26 14:42:51.009820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.547 [2024-04-26 14:42:51.009863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.851 14:42:53 -- event/event.sh@23 -- # for i in {0..2} 00:06:11.851 14:42:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.851 spdk_app_start Round 2 00:06:11.851 14:42:53 -- event/event.sh@25 -- # waitforlisten 868263 /var/tmp/spdk-nbd.sock 00:06:11.851 14:42:53 -- common/autotest_common.sh@817 -- # '[' -z 868263 ']' 00:06:11.851 14:42:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.851 14:42:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.851 14:42:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
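Stripped of the xtrace prefixes, the write/verify pass that Rounds 0 and 1 performed above (and that Round 2 repeats below) is a plain dd/cmp loop. This condensed sketch is based on the traced commands, with $rootdir again standing for the spdk checkout path shown in full in the log:

    # write 1 MiB of random data through each exported NBD device, then read it back and compare
    tmp_file=$rootdir/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp_file of=$nbd bs=4096 count=256 oflag=direct   # O_DIRECT so the data really reaches the device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp_file $nbd                              # any mismatch fails the test
    done
    rm $tmp_file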
00:06:11.851 14:42:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.851 14:42:53 -- common/autotest_common.sh@10 -- # set +x 00:06:11.851 14:42:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.851 14:42:53 -- common/autotest_common.sh@850 -- # return 0 00:06:11.851 14:42:53 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.851 Malloc0 00:06:11.851 14:42:54 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.851 Malloc1 00:06:11.852 14:42:54 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@12 -- # local i 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.852 /dev/nbd0 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.852 14:42:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:11.852 14:42:54 -- common/autotest_common.sh@855 -- # local i 00:06:11.852 14:42:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:11.852 14:42:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:11.852 14:42:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:11.852 14:42:54 -- common/autotest_common.sh@859 -- # break 00:06:11.852 14:42:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:11.852 14:42:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:11.852 14:42:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.852 1+0 records in 00:06:11.852 1+0 records out 00:06:11.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327981 s, 12.5 MB/s 00:06:11.852 14:42:54 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.852 14:42:54 -- common/autotest_common.sh@872 -- # size=4096 00:06:11.852 14:42:54 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.852 14:42:54 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:06:11.852 14:42:54 -- common/autotest_common.sh@875 -- # return 0 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.852 14:42:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.113 /dev/nbd1 00:06:12.113 14:42:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.113 14:42:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.113 14:42:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:12.113 14:42:54 -- common/autotest_common.sh@855 -- # local i 00:06:12.113 14:42:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:12.113 14:42:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:12.113 14:42:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:12.113 14:42:54 -- common/autotest_common.sh@859 -- # break 00:06:12.113 14:42:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:12.113 14:42:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:12.113 14:42:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.113 1+0 records in 00:06:12.113 1+0 records out 00:06:12.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031617 s, 13.0 MB/s 00:06:12.113 14:42:54 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.113 14:42:54 -- common/autotest_common.sh@872 -- # size=4096 00:06:12.113 14:42:54 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.113 14:42:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:12.113 14:42:54 -- common/autotest_common.sh@875 -- # return 0 00:06:12.113 14:42:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.113 14:42:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.113 14:42:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.113 14:42:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.113 14:42:54 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.375 { 00:06:12.375 "nbd_device": "/dev/nbd0", 00:06:12.375 "bdev_name": "Malloc0" 00:06:12.375 }, 00:06:12.375 { 00:06:12.375 "nbd_device": "/dev/nbd1", 00:06:12.375 "bdev_name": "Malloc1" 00:06:12.375 } 00:06:12.375 ]' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.375 { 00:06:12.375 "nbd_device": "/dev/nbd0", 00:06:12.375 "bdev_name": "Malloc0" 00:06:12.375 }, 00:06:12.375 { 00:06:12.375 "nbd_device": "/dev/nbd1", 00:06:12.375 "bdev_name": "Malloc1" 00:06:12.375 } 00:06:12.375 ]' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.375 /dev/nbd1' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.375 /dev/nbd1' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.375 14:42:54 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.375 256+0 records in 00:06:12.375 256+0 records out 00:06:12.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119478 s, 87.8 MB/s 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.375 256+0 records in 00:06:12.375 256+0 records out 00:06:12.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159581 s, 65.7 MB/s 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.375 256+0 records in 00:06:12.375 256+0 records out 00:06:12.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168276 s, 62.3 MB/s 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@51 -- # local i 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.375 14:42:54 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.636 14:42:55 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.636 14:42:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.636 14:42:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.636 14:42:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.636 14:42:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.636 14:42:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.636 14:42:55 -- bdev/nbd_common.sh@41 -- # break 00:06:12.636 14:42:55 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.636 14:42:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.636 14:42:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@41 -- # break 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@65 -- # true 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.896 14:42:55 -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.896 14:42:55 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.156 14:42:55 -- event/event.sh@35 -- # sleep 3 00:06:13.156 [2024-04-26 14:42:55.817938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.416 [2024-04-26 14:42:55.879749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.416 [2024-04-26 14:42:55.879749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.416 [2024-04-26 14:42:55.911983] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.416 [2024-04-26 14:42:55.912022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
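Everything a round does to the target goes through its UNIX-domain RPC socket. Reduced to the rpc.py calls that appear verbatim in the trace (the ordering mirrors the traced cleanup sequence; the comments are interpretive), one iteration looks roughly like this:

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096      # 64 MB malloc bdev, 4096-byte blocks -> Malloc0
    $rpc bdev_malloc_create 64 4096      # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    $rpc nbd_get_disks                   # JSON list; grep -c /dev/nbd counts the exports
    # ... dd/cmp data verification as sketched earlier ...
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM      # ends this iteration; app_repeat restarts for the next round
    sleep 3                              # give the app time to come back up

The spdk_kill_instance call is what produces the "Shutdown signal received, stop current app iteration" messages seen between rounds.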
00:06:16.714 14:42:58 -- event/event.sh@38 -- # waitforlisten 868263 /var/tmp/spdk-nbd.sock 00:06:16.715 14:42:58 -- common/autotest_common.sh@817 -- # '[' -z 868263 ']' 00:06:16.715 14:42:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.715 14:42:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:16.715 14:42:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.715 14:42:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:16.715 14:42:58 -- common/autotest_common.sh@10 -- # set +x 00:06:16.715 14:42:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:16.715 14:42:58 -- common/autotest_common.sh@850 -- # return 0 00:06:16.715 14:42:58 -- event/event.sh@39 -- # killprocess 868263 00:06:16.715 14:42:58 -- common/autotest_common.sh@936 -- # '[' -z 868263 ']' 00:06:16.715 14:42:58 -- common/autotest_common.sh@940 -- # kill -0 868263 00:06:16.715 14:42:58 -- common/autotest_common.sh@941 -- # uname 00:06:16.715 14:42:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.715 14:42:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 868263 00:06:16.715 14:42:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.715 14:42:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.715 14:42:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 868263' 00:06:16.715 killing process with pid 868263 00:06:16.715 14:42:58 -- common/autotest_common.sh@955 -- # kill 868263 00:06:16.715 14:42:58 -- common/autotest_common.sh@960 -- # wait 868263 00:06:16.715 spdk_app_start is called in Round 0. 00:06:16.715 Shutdown signal received, stop current app iteration 00:06:16.715 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:06:16.715 spdk_app_start is called in Round 1. 00:06:16.715 Shutdown signal received, stop current app iteration 00:06:16.715 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:06:16.715 spdk_app_start is called in Round 2. 00:06:16.715 Shutdown signal received, stop current app iteration 00:06:16.715 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:06:16.715 spdk_app_start is called in Round 3. 
00:06:16.715 Shutdown signal received, stop current app iteration 00:06:16.715 14:42:59 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:16.715 14:42:59 -- event/event.sh@42 -- # return 0 00:06:16.715 00:06:16.715 real 0m15.602s 00:06:16.715 user 0m33.756s 00:06:16.715 sys 0m2.051s 00:06:16.715 14:42:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.715 14:42:59 -- common/autotest_common.sh@10 -- # set +x 00:06:16.715 ************************************ 00:06:16.715 END TEST app_repeat 00:06:16.715 ************************************ 00:06:16.715 14:42:59 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:16.715 14:42:59 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:16.715 14:42:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.715 14:42:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.715 14:42:59 -- common/autotest_common.sh@10 -- # set +x 00:06:16.715 ************************************ 00:06:16.715 START TEST cpu_locks 00:06:16.715 ************************************ 00:06:16.715 14:42:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:16.715 * Looking for test storage... 00:06:16.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:16.715 14:42:59 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:16.715 14:42:59 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:16.715 14:42:59 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:16.715 14:42:59 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:16.715 14:42:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.715 14:42:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.715 14:42:59 -- common/autotest_common.sh@10 -- # set +x 00:06:16.975 ************************************ 00:06:16.975 START TEST default_locks 00:06:16.975 ************************************ 00:06:16.975 14:42:59 -- common/autotest_common.sh@1111 -- # default_locks 00:06:16.975 14:42:59 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=871850 00:06:16.975 14:42:59 -- event/cpu_locks.sh@47 -- # waitforlisten 871850 00:06:16.975 14:42:59 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.975 14:42:59 -- common/autotest_common.sh@817 -- # '[' -z 871850 ']' 00:06:16.975 14:42:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.975 14:42:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:16.976 14:42:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.976 14:42:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:16.976 14:42:59 -- common/autotest_common.sh@10 -- # set +x 00:06:16.976 [2024-04-26 14:42:59.527170] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:16.976 [2024-04-26 14:42:59.527216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871850 ] 00:06:16.976 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.976 [2024-04-26 14:42:59.587458] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.235 [2024-04-26 14:42:59.650072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.805 14:43:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:17.805 14:43:00 -- common/autotest_common.sh@850 -- # return 0 00:06:17.805 14:43:00 -- event/cpu_locks.sh@49 -- # locks_exist 871850 00:06:17.805 14:43:00 -- event/cpu_locks.sh@22 -- # lslocks -p 871850 00:06:17.806 14:43:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.066 lslocks: write error 00:06:18.066 14:43:00 -- event/cpu_locks.sh@50 -- # killprocess 871850 00:06:18.066 14:43:00 -- common/autotest_common.sh@936 -- # '[' -z 871850 ']' 00:06:18.066 14:43:00 -- common/autotest_common.sh@940 -- # kill -0 871850 00:06:18.066 14:43:00 -- common/autotest_common.sh@941 -- # uname 00:06:18.066 14:43:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.066 14:43:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 871850 00:06:18.066 14:43:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.066 14:43:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.067 14:43:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 871850' 00:06:18.067 killing process with pid 871850 00:06:18.067 14:43:00 -- common/autotest_common.sh@955 -- # kill 871850 00:06:18.067 14:43:00 -- common/autotest_common.sh@960 -- # wait 871850 00:06:18.327 14:43:00 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 871850 00:06:18.327 14:43:00 -- common/autotest_common.sh@638 -- # local es=0 00:06:18.327 14:43:00 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 871850 00:06:18.327 14:43:00 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:18.327 14:43:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:18.327 14:43:00 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:18.327 14:43:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:18.327 14:43:00 -- common/autotest_common.sh@641 -- # waitforlisten 871850 00:06:18.327 14:43:00 -- common/autotest_common.sh@817 -- # '[' -z 871850 ']' 00:06:18.327 14:43:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.327 14:43:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.327 14:43:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
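The default_locks flow traced here follows the pattern the cpu_locks sub-tests share: start spdk_tgt pinned to a core mask, prove with lslocks that the per-core lock is held, then kill the target and assert it can no longer be reached. A minimal sketch, assuming the waitforlisten/killprocess/NOT helpers from autotest_common.sh behave as their names suggest:

    # default_locks: core 0 is claimed with -m 0x1, so an spdk_cpu_lock entry must appear
    $rootdir/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid
    lslocks -p $spdk_tgt_pid | grep -q spdk_cpu_lock   # the core lock is held while the target runs
    killprocess $spdk_tgt_pid
    NOT waitforlisten $spdk_tgt_pid                    # expected to fail: the pid is gone

The "No such process" and "ERROR: process (pid: ...) is no longer running" lines below are the expected output of that negative check, not a test failure.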
00:06:18.327 14:43:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.327 14:43:00 -- common/autotest_common.sh@10 -- # set +x 00:06:18.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (871850) - No such process 00:06:18.327 ERROR: process (pid: 871850) is no longer running 00:06:18.327 14:43:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:18.327 14:43:00 -- common/autotest_common.sh@850 -- # return 1 00:06:18.327 14:43:00 -- common/autotest_common.sh@641 -- # es=1 00:06:18.327 14:43:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:18.327 14:43:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:18.327 14:43:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:18.327 14:43:00 -- event/cpu_locks.sh@54 -- # no_locks 00:06:18.327 14:43:00 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.327 14:43:00 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.327 14:43:00 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.327 00:06:18.327 real 0m1.468s 00:06:18.327 user 0m1.573s 00:06:18.327 sys 0m0.465s 00:06:18.327 14:43:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.327 14:43:00 -- common/autotest_common.sh@10 -- # set +x 00:06:18.327 ************************************ 00:06:18.327 END TEST default_locks 00:06:18.327 ************************************ 00:06:18.327 14:43:00 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:18.327 14:43:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.327 14:43:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.328 14:43:00 -- common/autotest_common.sh@10 -- # set +x 00:06:18.588 ************************************ 00:06:18.588 START TEST default_locks_via_rpc 00:06:18.588 ************************************ 00:06:18.588 14:43:01 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:18.588 14:43:01 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=872215 00:06:18.588 14:43:01 -- event/cpu_locks.sh@63 -- # waitforlisten 872215 00:06:18.588 14:43:01 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.588 14:43:01 -- common/autotest_common.sh@817 -- # '[' -z 872215 ']' 00:06:18.588 14:43:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.588 14:43:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.588 14:43:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.588 14:43:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.588 14:43:01 -- common/autotest_common.sh@10 -- # set +x 00:06:18.588 [2024-04-26 14:43:01.186630] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:18.588 [2024-04-26 14:43:01.186688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872215 ] 00:06:18.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.588 [2024-04-26 14:43:01.250653] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.848 [2024-04-26 14:43:01.323967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.419 14:43:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.419 14:43:01 -- common/autotest_common.sh@850 -- # return 0 00:06:19.419 14:43:01 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.419 14:43:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.419 14:43:01 -- common/autotest_common.sh@10 -- # set +x 00:06:19.419 14:43:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.419 14:43:01 -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.419 14:43:01 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.419 14:43:01 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.419 14:43:01 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.419 14:43:01 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.419 14:43:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.419 14:43:01 -- common/autotest_common.sh@10 -- # set +x 00:06:19.419 14:43:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.419 14:43:01 -- event/cpu_locks.sh@71 -- # locks_exist 872215 00:06:19.419 14:43:01 -- event/cpu_locks.sh@22 -- # lslocks -p 872215 00:06:19.419 14:43:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.680 14:43:02 -- event/cpu_locks.sh@73 -- # killprocess 872215 00:06:19.680 14:43:02 -- common/autotest_common.sh@936 -- # '[' -z 872215 ']' 00:06:19.680 14:43:02 -- common/autotest_common.sh@940 -- # kill -0 872215 00:06:19.680 14:43:02 -- common/autotest_common.sh@941 -- # uname 00:06:19.680 14:43:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.680 14:43:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 872215 00:06:19.680 14:43:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.680 14:43:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.680 14:43:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 872215' 00:06:19.680 killing process with pid 872215 00:06:19.680 14:43:02 -- common/autotest_common.sh@955 -- # kill 872215 00:06:19.680 14:43:02 -- common/autotest_common.sh@960 -- # wait 872215 00:06:19.955 00:06:19.955 real 0m1.299s 00:06:19.955 user 0m1.379s 00:06:19.955 sys 0m0.429s 00:06:19.955 14:43:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.955 14:43:02 -- common/autotest_common.sh@10 -- # set +x 00:06:19.955 ************************************ 00:06:19.955 END TEST default_locks_via_rpc 00:06:19.955 ************************************ 00:06:19.955 14:43:02 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:19.955 14:43:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.955 14:43:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.955 14:43:02 -- common/autotest_common.sh@10 -- # set +x 00:06:19.955 ************************************ 00:06:19.955 START TEST non_locking_app_on_locked_coremask 00:06:19.955 
************************************ 00:06:19.955 14:43:02 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:19.955 14:43:02 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=872518 00:06:19.955 14:43:02 -- event/cpu_locks.sh@81 -- # waitforlisten 872518 /var/tmp/spdk.sock 00:06:19.955 14:43:02 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.955 14:43:02 -- common/autotest_common.sh@817 -- # '[' -z 872518 ']' 00:06:19.955 14:43:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.955 14:43:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:19.955 14:43:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.955 14:43:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:19.955 14:43:02 -- common/autotest_common.sh@10 -- # set +x 00:06:20.226 [2024-04-26 14:43:02.661241] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:20.226 [2024-04-26 14:43:02.661296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872518 ] 00:06:20.226 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.226 [2024-04-26 14:43:02.726305] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.226 [2024-04-26 14:43:02.798835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.810 14:43:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:20.810 14:43:03 -- common/autotest_common.sh@850 -- # return 0 00:06:20.810 14:43:03 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=872609 00:06:20.810 14:43:03 -- event/cpu_locks.sh@85 -- # waitforlisten 872609 /var/tmp/spdk2.sock 00:06:20.810 14:43:03 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.810 14:43:03 -- common/autotest_common.sh@817 -- # '[' -z 872609 ']' 00:06:20.810 14:43:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.810 14:43:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:20.810 14:43:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.811 14:43:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:20.811 14:43:03 -- common/autotest_common.sh@10 -- # set +x 00:06:21.074 [2024-04-26 14:43:03.479802] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:21.074 [2024-04-26 14:43:03.479865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872609 ] 00:06:21.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.074 [2024-04-26 14:43:03.568446] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.074 [2024-04-26 14:43:03.568477] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.074 [2024-04-26 14:43:03.695563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.647 14:43:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.647 14:43:04 -- common/autotest_common.sh@850 -- # return 0 00:06:21.647 14:43:04 -- event/cpu_locks.sh@87 -- # locks_exist 872518 00:06:21.647 14:43:04 -- event/cpu_locks.sh@22 -- # lslocks -p 872518 00:06:21.647 14:43:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.220 lslocks: write error 00:06:22.220 14:43:04 -- event/cpu_locks.sh@89 -- # killprocess 872518 00:06:22.220 14:43:04 -- common/autotest_common.sh@936 -- # '[' -z 872518 ']' 00:06:22.220 14:43:04 -- common/autotest_common.sh@940 -- # kill -0 872518 00:06:22.220 14:43:04 -- common/autotest_common.sh@941 -- # uname 00:06:22.220 14:43:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.220 14:43:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 872518 00:06:22.481 14:43:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.481 14:43:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.481 14:43:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 872518' 00:06:22.481 killing process with pid 872518 00:06:22.482 14:43:04 -- common/autotest_common.sh@955 -- # kill 872518 00:06:22.482 14:43:04 -- common/autotest_common.sh@960 -- # wait 872518 00:06:22.743 14:43:05 -- event/cpu_locks.sh@90 -- # killprocess 872609 00:06:22.743 14:43:05 -- common/autotest_common.sh@936 -- # '[' -z 872609 ']' 00:06:22.743 14:43:05 -- common/autotest_common.sh@940 -- # kill -0 872609 00:06:22.743 14:43:05 -- common/autotest_common.sh@941 -- # uname 00:06:22.743 14:43:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.743 14:43:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 872609 00:06:22.743 14:43:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.743 14:43:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.743 14:43:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 872609' 00:06:22.743 killing process with pid 872609 00:06:22.743 14:43:05 -- common/autotest_common.sh@955 -- # kill 872609 00:06:22.743 14:43:05 -- common/autotest_common.sh@960 -- # wait 872609 00:06:23.005 00:06:23.005 real 0m2.976s 00:06:23.005 user 0m3.257s 00:06:23.005 sys 0m0.896s 00:06:23.005 14:43:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.005 14:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:23.005 ************************************ 00:06:23.005 END TEST non_locking_app_on_locked_coremask 00:06:23.005 ************************************ 00:06:23.005 14:43:05 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:23.005 14:43:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.005 14:43:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.005 14:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:23.266 ************************************ 00:06:23.266 START TEST locking_app_on_unlocked_coremask 00:06:23.266 ************************************ 00:06:23.266 14:43:05 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:23.266 14:43:05 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=873135 00:06:23.266 14:43:05 -- event/cpu_locks.sh@99 -- # 
waitforlisten 873135 /var/tmp/spdk.sock 00:06:23.266 14:43:05 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:23.267 14:43:05 -- common/autotest_common.sh@817 -- # '[' -z 873135 ']' 00:06:23.267 14:43:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.267 14:43:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:23.267 14:43:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.267 14:43:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:23.267 14:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:23.267 [2024-04-26 14:43:05.824237] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:23.267 [2024-04-26 14:43:05.824294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873135 ] 00:06:23.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.267 [2024-04-26 14:43:05.889894] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:23.267 [2024-04-26 14:43:05.889928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.527 [2024-04-26 14:43:05.963259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.100 14:43:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:24.100 14:43:06 -- common/autotest_common.sh@850 -- # return 0 00:06:24.100 14:43:06 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=873322 00:06:24.100 14:43:06 -- event/cpu_locks.sh@103 -- # waitforlisten 873322 /var/tmp/spdk2.sock 00:06:24.100 14:43:06 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.100 14:43:06 -- common/autotest_common.sh@817 -- # '[' -z 873322 ']' 00:06:24.100 14:43:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.100 14:43:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:24.100 14:43:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.100 14:43:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:24.100 14:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:24.100 [2024-04-26 14:43:06.640186] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
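The pattern here is two spdk_tgt instances sharing one core: the first is started with --disable-cpumask-locks so it does not claim core 0, and the second is pointed at a separate RPC socket with -r so the two targets do not collide on /var/tmp/spdk.sock; the locks_exist check that follows then confirms it is the second instance that holds the core lock. A sketch of the same launch sequence, with paths shortened relative to the commands recorded above:

    # first target: core 0, core locks disabled, default RPC socket
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # second target: same core, separate RPC socket; this one takes the core lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &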
00:06:24.100 [2024-04-26 14:43:06.640237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873322 ] 00:06:24.100 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.100 [2024-04-26 14:43:06.728394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.361 [2024-04-26 14:43:06.855878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.934 14:43:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:24.934 14:43:07 -- common/autotest_common.sh@850 -- # return 0 00:06:24.934 14:43:07 -- event/cpu_locks.sh@105 -- # locks_exist 873322 00:06:24.934 14:43:07 -- event/cpu_locks.sh@22 -- # lslocks -p 873322 00:06:24.934 14:43:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.195 lslocks: write error 00:06:25.195 14:43:07 -- event/cpu_locks.sh@107 -- # killprocess 873135 00:06:25.195 14:43:07 -- common/autotest_common.sh@936 -- # '[' -z 873135 ']' 00:06:25.195 14:43:07 -- common/autotest_common.sh@940 -- # kill -0 873135 00:06:25.195 14:43:07 -- common/autotest_common.sh@941 -- # uname 00:06:25.195 14:43:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.195 14:43:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 873135 00:06:25.456 14:43:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:25.456 14:43:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:25.456 14:43:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 873135' 00:06:25.456 killing process with pid 873135 00:06:25.456 14:43:07 -- common/autotest_common.sh@955 -- # kill 873135 00:06:25.456 14:43:07 -- common/autotest_common.sh@960 -- # wait 873135 00:06:25.716 14:43:08 -- event/cpu_locks.sh@108 -- # killprocess 873322 00:06:25.717 14:43:08 -- common/autotest_common.sh@936 -- # '[' -z 873322 ']' 00:06:25.717 14:43:08 -- common/autotest_common.sh@940 -- # kill -0 873322 00:06:25.717 14:43:08 -- common/autotest_common.sh@941 -- # uname 00:06:25.717 14:43:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.717 14:43:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 873322 00:06:25.717 14:43:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:25.717 14:43:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:25.717 14:43:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 873322' 00:06:25.717 killing process with pid 873322 00:06:25.717 14:43:08 -- common/autotest_common.sh@955 -- # kill 873322 00:06:25.717 14:43:08 -- common/autotest_common.sh@960 -- # wait 873322 00:06:25.978 00:06:25.978 real 0m2.812s 00:06:25.978 user 0m3.057s 00:06:25.978 sys 0m0.848s 00:06:25.978 14:43:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.978 14:43:08 -- common/autotest_common.sh@10 -- # set +x 00:06:25.978 ************************************ 00:06:25.978 END TEST locking_app_on_unlocked_coremask 00:06:25.978 ************************************ 00:06:25.978 14:43:08 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:25.978 14:43:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.978 14:43:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.978 14:43:08 -- common/autotest_common.sh@10 -- # set +x 00:06:26.241 
************************************ 00:06:26.241 START TEST locking_app_on_locked_coremask 00:06:26.241 ************************************ 00:06:26.241 14:43:08 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:26.241 14:43:08 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=873707 00:06:26.241 14:43:08 -- event/cpu_locks.sh@116 -- # waitforlisten 873707 /var/tmp/spdk.sock 00:06:26.241 14:43:08 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.241 14:43:08 -- common/autotest_common.sh@817 -- # '[' -z 873707 ']' 00:06:26.241 14:43:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.241 14:43:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.241 14:43:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.241 14:43:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.241 14:43:08 -- common/autotest_common.sh@10 -- # set +x 00:06:26.241 [2024-04-26 14:43:08.822484] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:26.241 [2024-04-26 14:43:08.822544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873707 ] 00:06:26.241 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.241 [2024-04-26 14:43:08.888483] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.502 [2024-04-26 14:43:08.962452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.076 14:43:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.076 14:43:09 -- common/autotest_common.sh@850 -- # return 0 00:06:27.076 14:43:09 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=874035 00:06:27.076 14:43:09 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 874035 /var/tmp/spdk2.sock 00:06:27.076 14:43:09 -- common/autotest_common.sh@638 -- # local es=0 00:06:27.076 14:43:09 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.076 14:43:09 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 874035 /var/tmp/spdk2.sock 00:06:27.076 14:43:09 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:27.076 14:43:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.076 14:43:09 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:27.076 14:43:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:27.076 14:43:09 -- common/autotest_common.sh@641 -- # waitforlisten 874035 /var/tmp/spdk2.sock 00:06:27.076 14:43:09 -- common/autotest_common.sh@817 -- # '[' -z 874035 ']' 00:06:27.076 14:43:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.076 14:43:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:27.076 14:43:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
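locking_app_on_locked_coremask relies on the harness NOT helper: the first target already holds the lock on core 0, so starting a second target on the same mask must fail, and NOT inverts the exit status so the test passes exactly when waitforlisten on the second target does not succeed. In simplified form (the autotest_common.sh version also normalizes the wrapped command's exit code):

    # succeeds only when the wrapped command fails
    NOT() { ! "$@"; }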
00:06:27.076 14:43:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:27.076 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:27.076 [2024-04-26 14:43:09.625111] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:27.076 [2024-04-26 14:43:09.625163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874035 ] 00:06:27.076 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.076 [2024-04-26 14:43:09.711695] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 873707 has claimed it. 00:06:27.076 [2024-04-26 14:43:09.711731] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (874035) - No such process 00:06:27.647 ERROR: process (pid: 874035) is no longer running 00:06:27.647 14:43:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.647 14:43:10 -- common/autotest_common.sh@850 -- # return 1 00:06:27.647 14:43:10 -- common/autotest_common.sh@641 -- # es=1 00:06:27.647 14:43:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:27.647 14:43:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:27.647 14:43:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:27.647 14:43:10 -- event/cpu_locks.sh@122 -- # locks_exist 873707 00:06:27.647 14:43:10 -- event/cpu_locks.sh@22 -- # lslocks -p 873707 00:06:27.647 14:43:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.217 lslocks: write error 00:06:28.217 14:43:10 -- event/cpu_locks.sh@124 -- # killprocess 873707 00:06:28.217 14:43:10 -- common/autotest_common.sh@936 -- # '[' -z 873707 ']' 00:06:28.217 14:43:10 -- common/autotest_common.sh@940 -- # kill -0 873707 00:06:28.217 14:43:10 -- common/autotest_common.sh@941 -- # uname 00:06:28.217 14:43:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.217 14:43:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 873707 00:06:28.217 14:43:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.217 14:43:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.217 14:43:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 873707' 00:06:28.217 killing process with pid 873707 00:06:28.217 14:43:10 -- common/autotest_common.sh@955 -- # kill 873707 00:06:28.217 14:43:10 -- common/autotest_common.sh@960 -- # wait 873707 00:06:28.217 00:06:28.217 real 0m2.098s 00:06:28.217 user 0m2.341s 00:06:28.217 sys 0m0.572s 00:06:28.217 14:43:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.217 14:43:10 -- common/autotest_common.sh@10 -- # set +x 00:06:28.217 ************************************ 00:06:28.217 END TEST locking_app_on_locked_coremask 00:06:28.217 ************************************ 00:06:28.478 14:43:10 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:28.478 14:43:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.478 14:43:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.478 14:43:10 -- common/autotest_common.sh@10 -- # set +x 00:06:28.478 ************************************ 00:06:28.478 START TEST locking_overlapped_coremask 00:06:28.478 
************************************ 00:06:28.478 14:43:11 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:28.478 14:43:11 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=874328 00:06:28.478 14:43:11 -- event/cpu_locks.sh@133 -- # waitforlisten 874328 /var/tmp/spdk.sock 00:06:28.478 14:43:11 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:28.478 14:43:11 -- common/autotest_common.sh@817 -- # '[' -z 874328 ']' 00:06:28.478 14:43:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.478 14:43:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:28.478 14:43:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.478 14:43:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:28.478 14:43:11 -- common/autotest_common.sh@10 -- # set +x 00:06:28.478 [2024-04-26 14:43:11.095549] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:28.478 [2024-04-26 14:43:11.095608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874328 ] 00:06:28.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.740 [2024-04-26 14:43:11.160032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.740 [2024-04-26 14:43:11.233481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.740 [2024-04-26 14:43:11.233598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.740 [2024-04-26 14:43:11.233601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.311 14:43:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.311 14:43:11 -- common/autotest_common.sh@850 -- # return 0 00:06:29.311 14:43:11 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=874423 00:06:29.311 14:43:11 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 874423 /var/tmp/spdk2.sock 00:06:29.311 14:43:11 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:29.311 14:43:11 -- common/autotest_common.sh@638 -- # local es=0 00:06:29.311 14:43:11 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 874423 /var/tmp/spdk2.sock 00:06:29.311 14:43:11 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:29.311 14:43:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:29.311 14:43:11 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:29.311 14:43:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:29.311 14:43:11 -- common/autotest_common.sh@641 -- # waitforlisten 874423 /var/tmp/spdk2.sock 00:06:29.311 14:43:11 -- common/autotest_common.sh@817 -- # '[' -z 874423 ']' 00:06:29.311 14:43:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.311 14:43:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.311 14:43:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:29.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.311 14:43:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.311 14:43:11 -- common/autotest_common.sh@10 -- # set +x 00:06:29.311 [2024-04-26 14:43:11.914434] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:29.311 [2024-04-26 14:43:11.914486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874423 ] 00:06:29.311 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.571 [2024-04-26 14:43:11.985777] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 874328 has claimed it. 00:06:29.571 [2024-04-26 14:43:11.985807] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (874423) - No such process 00:06:30.143 ERROR: process (pid: 874423) is no longer running 00:06:30.143 14:43:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:30.143 14:43:12 -- common/autotest_common.sh@850 -- # return 1 00:06:30.143 14:43:12 -- common/autotest_common.sh@641 -- # es=1 00:06:30.143 14:43:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:30.143 14:43:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:30.143 14:43:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:30.143 14:43:12 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:30.143 14:43:12 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.143 14:43:12 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.143 14:43:12 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.143 14:43:12 -- event/cpu_locks.sh@141 -- # killprocess 874328 00:06:30.143 14:43:12 -- common/autotest_common.sh@936 -- # '[' -z 874328 ']' 00:06:30.143 14:43:12 -- common/autotest_common.sh@940 -- # kill -0 874328 00:06:30.143 14:43:12 -- common/autotest_common.sh@941 -- # uname 00:06:30.143 14:43:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.144 14:43:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 874328 00:06:30.144 14:43:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:30.144 14:43:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:30.144 14:43:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 874328' 00:06:30.144 killing process with pid 874328 00:06:30.144 14:43:12 -- common/autotest_common.sh@955 -- # kill 874328 00:06:30.144 14:43:12 -- common/autotest_common.sh@960 -- # wait 874328 00:06:30.144 00:06:30.144 real 0m1.753s 00:06:30.144 user 0m4.941s 00:06:30.144 sys 0m0.374s 00:06:30.144 14:43:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.144 14:43:12 -- common/autotest_common.sh@10 -- # set +x 00:06:30.144 ************************************ 00:06:30.144 END TEST locking_overlapped_coremask 00:06:30.144 ************************************ 00:06:30.405 14:43:12 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.405 14:43:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.405 14:43:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.405 14:43:12 -- common/autotest_common.sh@10 -- # set +x 00:06:30.405 ************************************ 00:06:30.405 START TEST locking_overlapped_coremask_via_rpc 00:06:30.405 ************************************ 00:06:30.405 14:43:12 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:30.405 14:43:12 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=874788 00:06:30.405 14:43:12 -- event/cpu_locks.sh@149 -- # waitforlisten 874788 /var/tmp/spdk.sock 00:06:30.405 14:43:12 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.405 14:43:12 -- common/autotest_common.sh@817 -- # '[' -z 874788 ']' 00:06:30.405 14:43:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.405 14:43:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:30.405 14:43:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.405 14:43:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:30.405 14:43:12 -- common/autotest_common.sh@10 -- # set +x 00:06:30.405 [2024-04-26 14:43:13.043096] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:30.405 [2024-04-26 14:43:13.043153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874788 ] 00:06:30.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.667 [2024-04-26 14:43:13.107618] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.667 [2024-04-26 14:43:13.107649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.667 [2024-04-26 14:43:13.181347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.667 [2024-04-26 14:43:13.181465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.667 [2024-04-26 14:43:13.181468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.237 14:43:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:31.237 14:43:13 -- common/autotest_common.sh@850 -- # return 0 00:06:31.237 14:43:13 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=874806 00:06:31.237 14:43:13 -- event/cpu_locks.sh@153 -- # waitforlisten 874806 /var/tmp/spdk2.sock 00:06:31.237 14:43:13 -- common/autotest_common.sh@817 -- # '[' -z 874806 ']' 00:06:31.237 14:43:13 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:31.237 14:43:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.237 14:43:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:31.237 14:43:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
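The two targets in this test are deliberately started on overlapping core masks, and that overlap is what the later claim failure exercises:

    # -m 0x7  = 0b00111 -> cores 0,1,2
    # -m 0x1c = 0b11100 -> cores 2,3,4   (both masks include core 2)

Both instances start with --disable-cpumask-locks; the first then claims its cores over RPC, so when the second asks to do the same it fails on the shared core 2, as the JSON-RPC error below shows.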
00:06:31.237 14:43:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:31.237 14:43:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.237 [2024-04-26 14:43:13.864362] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:31.237 [2024-04-26 14:43:13.864413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874806 ] 00:06:31.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.498 [2024-04-26 14:43:13.934855] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:31.498 [2024-04-26 14:43:13.934875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.498 [2024-04-26 14:43:14.038677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.498 [2024-04-26 14:43:14.041902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.498 [2024-04-26 14:43:14.041904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.071 14:43:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:32.071 14:43:14 -- common/autotest_common.sh@850 -- # return 0 00:06:32.071 14:43:14 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.071 14:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:32.071 14:43:14 -- common/autotest_common.sh@10 -- # set +x 00:06:32.071 14:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:32.071 14:43:14 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.072 14:43:14 -- common/autotest_common.sh@638 -- # local es=0 00:06:32.072 14:43:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.072 14:43:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:32.072 14:43:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:32.072 14:43:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:32.072 14:43:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:32.072 14:43:14 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.072 14:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:32.072 14:43:14 -- common/autotest_common.sh@10 -- # set +x 00:06:32.072 [2024-04-26 14:43:14.644898] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 874788 has claimed it. 
00:06:32.072 request: 00:06:32.072 { 00:06:32.072 "method": "framework_enable_cpumask_locks", 00:06:32.072 "req_id": 1 00:06:32.072 } 00:06:32.072 Got JSON-RPC error response 00:06:32.072 response: 00:06:32.072 { 00:06:32.072 "code": -32603, 00:06:32.072 "message": "Failed to claim CPU core: 2" 00:06:32.072 } 00:06:32.072 14:43:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:32.072 14:43:14 -- common/autotest_common.sh@641 -- # es=1 00:06:32.072 14:43:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:32.072 14:43:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:32.072 14:43:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:32.072 14:43:14 -- event/cpu_locks.sh@158 -- # waitforlisten 874788 /var/tmp/spdk.sock 00:06:32.072 14:43:14 -- common/autotest_common.sh@817 -- # '[' -z 874788 ']' 00:06:32.072 14:43:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.072 14:43:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:32.072 14:43:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.072 14:43:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:32.072 14:43:14 -- common/autotest_common.sh@10 -- # set +x 00:06:32.333 14:43:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:32.333 14:43:14 -- common/autotest_common.sh@850 -- # return 0 00:06:32.333 14:43:14 -- event/cpu_locks.sh@159 -- # waitforlisten 874806 /var/tmp/spdk2.sock 00:06:32.333 14:43:14 -- common/autotest_common.sh@817 -- # '[' -z 874806 ']' 00:06:32.333 14:43:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.333 14:43:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:32.333 14:43:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
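The request/response pair above is the raw JSON-RPC exchange behind rpc_cmd framework_enable_cpumask_locks: the second target asks to claim the cores in its mask and gets error -32603 because core 2 is already locked by pid 874788. Outside the test wrapper the same call can be issued with SPDK's rpc.py against the second target's socket (a sketch, not part of the test script):

    # ask the target on /var/tmp/spdk2.sock to claim its CPU core locks
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks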
00:06:32.333 14:43:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:32.333 14:43:14 -- common/autotest_common.sh@10 -- # set +x 00:06:32.333 14:43:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:32.333 14:43:14 -- common/autotest_common.sh@850 -- # return 0 00:06:32.333 14:43:14 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.333 14:43:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.333 14:43:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.334 14:43:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.334 00:06:32.334 real 0m2.008s 00:06:32.334 user 0m0.777s 00:06:32.334 sys 0m0.157s 00:06:32.334 14:43:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.334 14:43:14 -- common/autotest_common.sh@10 -- # set +x 00:06:32.334 ************************************ 00:06:32.334 END TEST locking_overlapped_coremask_via_rpc 00:06:32.334 ************************************ 00:06:32.595 14:43:15 -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.595 14:43:15 -- event/cpu_locks.sh@15 -- # [[ -z 874788 ]] 00:06:32.595 14:43:15 -- event/cpu_locks.sh@15 -- # killprocess 874788 00:06:32.595 14:43:15 -- common/autotest_common.sh@936 -- # '[' -z 874788 ']' 00:06:32.595 14:43:15 -- common/autotest_common.sh@940 -- # kill -0 874788 00:06:32.595 14:43:15 -- common/autotest_common.sh@941 -- # uname 00:06:32.595 14:43:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.595 14:43:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 874788 00:06:32.595 14:43:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.595 14:43:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.595 14:43:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 874788' 00:06:32.595 killing process with pid 874788 00:06:32.595 14:43:15 -- common/autotest_common.sh@955 -- # kill 874788 00:06:32.595 14:43:15 -- common/autotest_common.sh@960 -- # wait 874788 00:06:32.857 14:43:15 -- event/cpu_locks.sh@16 -- # [[ -z 874806 ]] 00:06:32.857 14:43:15 -- event/cpu_locks.sh@16 -- # killprocess 874806 00:06:32.857 14:43:15 -- common/autotest_common.sh@936 -- # '[' -z 874806 ']' 00:06:32.857 14:43:15 -- common/autotest_common.sh@940 -- # kill -0 874806 00:06:32.857 14:43:15 -- common/autotest_common.sh@941 -- # uname 00:06:32.857 14:43:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.857 14:43:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 874806 00:06:32.857 14:43:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:32.857 14:43:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:32.857 14:43:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 874806' 00:06:32.857 killing process with pid 874806 00:06:32.857 14:43:15 -- common/autotest_common.sh@955 -- # kill 874806 00:06:32.857 14:43:15 -- common/autotest_common.sh@960 -- # wait 874806 00:06:33.118 14:43:15 -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.118 14:43:15 -- event/cpu_locks.sh@1 -- # cleanup 00:06:33.118 14:43:15 -- event/cpu_locks.sh@15 -- # [[ -z 874788 ]] 00:06:33.118 14:43:15 -- event/cpu_locks.sh@15 -- # killprocess 874788 00:06:33.118 
14:43:15 -- common/autotest_common.sh@936 -- # '[' -z 874788 ']' 00:06:33.118 14:43:15 -- common/autotest_common.sh@940 -- # kill -0 874788 00:06:33.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (874788) - No such process 00:06:33.118 14:43:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 874788 is not found' 00:06:33.118 Process with pid 874788 is not found 00:06:33.118 14:43:15 -- event/cpu_locks.sh@16 -- # [[ -z 874806 ]] 00:06:33.118 14:43:15 -- event/cpu_locks.sh@16 -- # killprocess 874806 00:06:33.118 14:43:15 -- common/autotest_common.sh@936 -- # '[' -z 874806 ']' 00:06:33.118 14:43:15 -- common/autotest_common.sh@940 -- # kill -0 874806 00:06:33.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (874806) - No such process 00:06:33.118 14:43:15 -- common/autotest_common.sh@963 -- # echo 'Process with pid 874806 is not found' 00:06:33.118 Process with pid 874806 is not found 00:06:33.118 14:43:15 -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.118 00:06:33.118 real 0m16.331s 00:06:33.118 user 0m27.133s 00:06:33.118 sys 0m4.979s 00:06:33.118 14:43:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.118 14:43:15 -- common/autotest_common.sh@10 -- # set +x 00:06:33.118 ************************************ 00:06:33.118 END TEST cpu_locks 00:06:33.118 ************************************ 00:06:33.118 00:06:33.118 real 0m43.868s 00:06:33.118 user 1m22.001s 00:06:33.118 sys 0m8.339s 00:06:33.118 14:43:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.118 14:43:15 -- common/autotest_common.sh@10 -- # set +x 00:06:33.118 ************************************ 00:06:33.118 END TEST event 00:06:33.118 ************************************ 00:06:33.119 14:43:15 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.119 14:43:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:33.119 14:43:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.119 14:43:15 -- common/autotest_common.sh@10 -- # set +x 00:06:33.119 ************************************ 00:06:33.119 START TEST thread 00:06:33.119 ************************************ 00:06:33.119 14:43:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.380 * Looking for test storage... 00:06:33.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:33.380 14:43:15 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.380 14:43:15 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:33.380 14:43:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.380 14:43:15 -- common/autotest_common.sh@10 -- # set +x 00:06:33.380 ************************************ 00:06:33.380 START TEST thread_poller_perf 00:06:33.380 ************************************ 00:06:33.380 14:43:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.380 [2024-04-26 14:43:16.041942] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:33.380 [2024-04-26 14:43:16.042027] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875423 ] 00:06:33.641 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.641 [2024-04-26 14:43:16.108058] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.641 [2024-04-26 14:43:16.173509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.641 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:34.583 ====================================== 00:06:34.583 busy:2408535248 (cyc) 00:06:34.584 total_run_count: 287000 00:06:34.584 tsc_hz: 2400000000 (cyc) 00:06:34.584 ====================================== 00:06:34.584 poller_cost: 8392 (cyc), 3496 (nsec) 00:06:34.584 00:06:34.584 real 0m1.213s 00:06:34.584 user 0m1.132s 00:06:34.584 sys 0m0.077s 00:06:34.584 14:43:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.584 14:43:17 -- common/autotest_common.sh@10 -- # set +x 00:06:34.584 ************************************ 00:06:34.584 END TEST thread_poller_perf 00:06:34.584 ************************************ 00:06:34.844 14:43:17 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.844 14:43:17 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:34.844 14:43:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.844 14:43:17 -- common/autotest_common.sh@10 -- # set +x 00:06:34.844 ************************************ 00:06:34.844 START TEST thread_poller_perf 00:06:34.844 ************************************ 00:06:34.844 14:43:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.844 [2024-04-26 14:43:17.431871] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:34.844 [2024-04-26 14:43:17.431961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875632 ] 00:06:34.845 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.845 [2024-04-26 14:43:17.496375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.105 [2024-04-26 14:43:17.559406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.105 Running 1000 pollers for 1 seconds with 0 microseconds period. 
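The poller_cost reported in the summary above is simply the busy cycle count divided by the number of poller runs, converted to nanoseconds with the reported TSC frequency; for this first run:

    poller_cost = 2408535248 cyc / 287000 runs ≈ 8392 cyc
    8392 cyc / 2.4 GHz ≈ 3496 nsec

The second run, started here with a 0-microsecond period, executes far more iterations and therefore shows a much lower per-poll cost in the next summary.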
00:06:36.045 ====================================== 00:06:36.045 busy:2402308772 (cyc) 00:06:36.045 total_run_count: 3814000 00:06:36.045 tsc_hz: 2400000000 (cyc) 00:06:36.045 ====================================== 00:06:36.045 poller_cost: 629 (cyc), 262 (nsec) 00:06:36.045 00:06:36.045 real 0m1.202s 00:06:36.045 user 0m1.131s 00:06:36.045 sys 0m0.067s 00:06:36.045 14:43:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.045 14:43:18 -- common/autotest_common.sh@10 -- # set +x 00:06:36.045 ************************************ 00:06:36.045 END TEST thread_poller_perf 00:06:36.045 ************************************ 00:06:36.045 14:43:18 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:36.045 00:06:36.045 real 0m2.880s 00:06:36.045 user 0m2.438s 00:06:36.045 sys 0m0.412s 00:06:36.045 14:43:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.045 14:43:18 -- common/autotest_common.sh@10 -- # set +x 00:06:36.045 ************************************ 00:06:36.045 END TEST thread 00:06:36.045 ************************************ 00:06:36.045 14:43:18 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:36.045 14:43:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.045 14:43:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.045 14:43:18 -- common/autotest_common.sh@10 -- # set +x 00:06:36.306 ************************************ 00:06:36.306 START TEST accel 00:06:36.306 ************************************ 00:06:36.306 14:43:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:36.306 * Looking for test storage... 00:06:36.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:36.306 14:43:18 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:36.306 14:43:18 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:36.306 14:43:18 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.306 14:43:18 -- accel/accel.sh@62 -- # spdk_tgt_pid=876021 00:06:36.306 14:43:18 -- accel/accel.sh@63 -- # waitforlisten 876021 00:06:36.306 14:43:18 -- common/autotest_common.sh@817 -- # '[' -z 876021 ']' 00:06:36.306 14:43:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.306 14:43:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:36.306 14:43:18 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:36.306 14:43:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.306 14:43:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:36.306 14:43:18 -- accel/accel.sh@61 -- # build_accel_config 00:06:36.306 14:43:18 -- common/autotest_common.sh@10 -- # set +x 00:06:36.306 14:43:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.306 14:43:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.306 14:43:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.306 14:43:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.306 14:43:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.306 14:43:18 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.306 14:43:18 -- accel/accel.sh@41 -- # jq -r . 
00:06:36.566 [2024-04-26 14:43:19.002885] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:36.566 [2024-04-26 14:43:19.002937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876021 ] 00:06:36.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.566 [2024-04-26 14:43:19.065159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.566 [2024-04-26 14:43:19.132507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.135 14:43:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:37.135 14:43:19 -- common/autotest_common.sh@850 -- # return 0 00:06:37.135 14:43:19 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:37.135 14:43:19 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:37.135 14:43:19 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:37.135 14:43:19 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:37.135 14:43:19 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:37.135 14:43:19 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:37.135 14:43:19 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:37.135 14:43:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.135 14:43:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.135 14:43:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.135 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.135 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.135 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.135 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.135 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.135 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.135 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.135 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.135 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.135 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # IFS== 00:06:37.395 14:43:19 -- accel/accel.sh@72 -- # read -r opc module 00:06:37.395 14:43:19 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:37.395 14:43:19 -- accel/accel.sh@75 -- # killprocess 876021 00:06:37.395 14:43:19 -- common/autotest_common.sh@936 -- # '[' -z 876021 ']' 00:06:37.395 14:43:19 -- common/autotest_common.sh@940 -- # kill -0 876021 00:06:37.395 14:43:19 -- common/autotest_common.sh@941 -- # uname 00:06:37.395 14:43:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.395 14:43:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 876021 00:06:37.395 14:43:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.395 14:43:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.395 14:43:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 876021' 00:06:37.395 killing process with pid 876021 00:06:37.395 14:43:19 -- common/autotest_common.sh@955 -- # kill 876021 00:06:37.395 14:43:19 -- common/autotest_common.sh@960 -- # wait 876021 00:06:37.654 14:43:20 -- accel/accel.sh@76 -- # trap - ERR 00:06:37.654 14:43:20 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:37.654 14:43:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:37.654 14:43:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.654 14:43:20 -- common/autotest_common.sh@10 -- # set +x 00:06:37.654 14:43:20 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:37.654 14:43:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:37.654 14:43:20 -- accel/accel.sh@12 -- # build_accel_config 
00:06:37.654 14:43:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.654 14:43:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.654 14:43:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.654 14:43:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.654 14:43:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.654 14:43:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.654 14:43:20 -- accel/accel.sh@41 -- # jq -r . 00:06:37.654 14:43:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.654 14:43:20 -- common/autotest_common.sh@10 -- # set +x 00:06:37.654 14:43:20 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:37.654 14:43:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:37.654 14:43:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.655 14:43:20 -- common/autotest_common.sh@10 -- # set +x 00:06:37.914 ************************************ 00:06:37.914 START TEST accel_missing_filename 00:06:37.914 ************************************ 00:06:37.914 14:43:20 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:37.914 14:43:20 -- common/autotest_common.sh@638 -- # local es=0 00:06:37.914 14:43:20 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:37.914 14:43:20 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:37.914 14:43:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:37.914 14:43:20 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:37.914 14:43:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:37.914 14:43:20 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:37.914 14:43:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:37.914 14:43:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.914 14:43:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.914 14:43:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.914 14:43:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.914 14:43:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.914 14:43:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.914 14:43:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.914 14:43:20 -- accel/accel.sh@41 -- # jq -r . 00:06:37.914 [2024-04-26 14:43:20.437116] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:37.914 [2024-04-26 14:43:20.437221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876401 ] 00:06:37.914 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.914 [2024-04-26 14:43:20.502425] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.914 [2024-04-26 14:43:20.574203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.173 [2024-04-26 14:43:20.606543] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.173 [2024-04-26 14:43:20.643771] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:38.173 A filename is required. 
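The 'A filename is required.' error above is the expected result of accel_missing_filename: compress and decompress workloads read their input from a file, so accel_perf must refuse -w compress without -l. The accel_compress_verify test that follows supplies the file but adds -y, which the compress path rejects. A plain invocation along the lines of the options used in these tests, assuming the bib test file path from this workspace, would be:

    # compress the test input once; -l names the uncompressed input file
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib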
00:06:38.173 14:43:20 -- common/autotest_common.sh@641 -- # es=234 00:06:38.173 14:43:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:38.173 14:43:20 -- common/autotest_common.sh@650 -- # es=106 00:06:38.173 14:43:20 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:38.173 14:43:20 -- common/autotest_common.sh@658 -- # es=1 00:06:38.173 14:43:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:38.173 00:06:38.173 real 0m0.290s 00:06:38.173 user 0m0.225s 00:06:38.173 sys 0m0.107s 00:06:38.173 14:43:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.173 14:43:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.173 ************************************ 00:06:38.173 END TEST accel_missing_filename 00:06:38.173 ************************************ 00:06:38.173 14:43:20 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.173 14:43:20 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:38.173 14:43:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.173 14:43:20 -- common/autotest_common.sh@10 -- # set +x 00:06:38.433 ************************************ 00:06:38.433 START TEST accel_compress_verify 00:06:38.433 ************************************ 00:06:38.433 14:43:20 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.433 14:43:20 -- common/autotest_common.sh@638 -- # local es=0 00:06:38.433 14:43:20 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.433 14:43:20 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:38.433 14:43:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:38.433 14:43:20 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:38.434 14:43:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:38.434 14:43:20 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.434 14:43:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.434 14:43:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.434 14:43:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.434 14:43:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.434 14:43:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.434 14:43:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.434 14:43:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.434 14:43:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.434 14:43:20 -- accel/accel.sh@41 -- # jq -r . 00:06:38.434 [2024-04-26 14:43:20.912794] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:38.434 [2024-04-26 14:43:20.912879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876465 ] 00:06:38.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.434 [2024-04-26 14:43:20.979081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.434 [2024-04-26 14:43:21.052769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.434 [2024-04-26 14:43:21.085271] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.694 [2024-04-26 14:43:21.122487] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:38.694 00:06:38.694 Compression does not support the verify option, aborting. 00:06:38.694 14:43:21 -- common/autotest_common.sh@641 -- # es=161 00:06:38.694 14:43:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:38.694 14:43:21 -- common/autotest_common.sh@650 -- # es=33 00:06:38.694 14:43:21 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:38.694 14:43:21 -- common/autotest_common.sh@658 -- # es=1 00:06:38.694 14:43:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:38.694 00:06:38.694 real 0m0.294s 00:06:38.694 user 0m0.225s 00:06:38.694 sys 0m0.110s 00:06:38.694 14:43:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.694 14:43:21 -- common/autotest_common.sh@10 -- # set +x 00:06:38.694 ************************************ 00:06:38.694 END TEST accel_compress_verify 00:06:38.694 ************************************ 00:06:38.694 14:43:21 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:38.694 14:43:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:38.694 14:43:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.694 14:43:21 -- common/autotest_common.sh@10 -- # set +x 00:06:38.694 ************************************ 00:06:38.694 START TEST accel_wrong_workload 00:06:38.694 ************************************ 00:06:38.694 14:43:21 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:38.694 14:43:21 -- common/autotest_common.sh@638 -- # local es=0 00:06:38.694 14:43:21 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:38.694 14:43:21 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:38.694 14:43:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:38.695 14:43:21 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:38.695 14:43:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:38.695 14:43:21 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:38.695 14:43:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:38.695 14:43:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.695 14:43:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.695 14:43:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.695 14:43:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.695 14:43:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.695 14:43:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.695 14:43:21 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.695 14:43:21 -- accel/accel.sh@41 -- # jq -r . 
00:06:38.956 Unsupported workload type: foobar 00:06:38.956 [2024-04-26 14:43:21.380913] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:38.956 accel_perf options: 00:06:38.956 [-h help message] 00:06:38.956 [-q queue depth per core] 00:06:38.956 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.956 [-T number of threads per core 00:06:38.956 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.956 [-t time in seconds] 00:06:38.956 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.956 [ dif_verify, , dif_generate, dif_generate_copy 00:06:38.956 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.956 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.956 [-S for crc32c workload, use this seed value (default 0) 00:06:38.956 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.956 [-f for fill workload, use this BYTE value (default 255) 00:06:38.956 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.956 [-y verify result if this switch is on] 00:06:38.956 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.956 Can be used to spread operations across a wider range of memory. 00:06:38.956 14:43:21 -- common/autotest_common.sh@641 -- # es=1 00:06:38.956 14:43:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:38.956 14:43:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:38.956 14:43:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:38.956 00:06:38.956 real 0m0.037s 00:06:38.956 user 0m0.024s 00:06:38.956 sys 0m0.013s 00:06:38.956 14:43:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.956 14:43:21 -- common/autotest_common.sh@10 -- # set +x 00:06:38.956 ************************************ 00:06:38.956 END TEST accel_wrong_workload 00:06:38.956 ************************************ 00:06:38.956 Error: writing output failed: Broken pipe 00:06:38.956 14:43:21 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.956 14:43:21 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:38.956 14:43:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.956 14:43:21 -- common/autotest_common.sh@10 -- # set +x 00:06:38.956 ************************************ 00:06:38.956 START TEST accel_negative_buffers 00:06:38.956 ************************************ 00:06:38.956 14:43:21 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.956 14:43:21 -- common/autotest_common.sh@638 -- # local es=0 00:06:38.956 14:43:21 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:38.956 14:43:21 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:38.956 14:43:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:38.956 14:43:21 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:38.956 14:43:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:38.956 14:43:21 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:38.956 14:43:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:38.956 14:43:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.956 14:43:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.956 14:43:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.956 14:43:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.956 14:43:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.956 14:43:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.956 14:43:21 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.956 14:43:21 -- accel/accel.sh@41 -- # jq -r . 00:06:38.956 -x option must be non-negative. 00:06:38.956 [2024-04-26 14:43:21.606857] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:38.956 accel_perf options: 00:06:38.956 [-h help message] 00:06:38.956 [-q queue depth per core] 00:06:38.956 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.956 [-T number of threads per core 00:06:38.956 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.956 [-t time in seconds] 00:06:38.956 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.956 [ dif_verify, , dif_generate, dif_generate_copy 00:06:38.956 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.956 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.956 [-S for crc32c workload, use this seed value (default 0) 00:06:38.956 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.956 [-f for fill workload, use this BYTE value (default 255) 00:06:38.956 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.956 [-y verify result if this switch is on] 00:06:38.956 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.956 Can be used to spread operations across a wider range of memory. 
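The usage text above states the valid ranges for these options; as a contrast to the rejected "-x -1", a minimal sketch of an invocation that satisfies them (the argument values here are an assumption derived only from the printed help, not from anything the harness runs next):

  # xor needs at least two source buffers (-x >= 2 per the usage text above);
  # -y verifies the result and -t 1 bounds the run to one second
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2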
00:06:38.956 14:43:21 -- common/autotest_common.sh@641 -- # es=1 00:06:38.956 14:43:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:38.956 14:43:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:38.956 14:43:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:38.956 00:06:38.956 real 0m0.037s 00:06:38.956 user 0m0.022s 00:06:38.956 sys 0m0.014s 00:06:38.956 14:43:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.956 14:43:21 -- common/autotest_common.sh@10 -- # set +x 00:06:38.956 ************************************ 00:06:38.956 END TEST accel_negative_buffers 00:06:38.956 ************************************ 00:06:39.218 Error: writing output failed: Broken pipe 00:06:39.218 14:43:21 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:39.218 14:43:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:39.218 14:43:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.218 14:43:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.218 ************************************ 00:06:39.218 START TEST accel_crc32c 00:06:39.218 ************************************ 00:06:39.218 14:43:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:39.218 14:43:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.218 14:43:21 -- accel/accel.sh@17 -- # local accel_module 00:06:39.218 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.218 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.218 14:43:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:39.218 14:43:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:39.218 14:43:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.218 14:43:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.218 14:43:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.218 14:43:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.218 14:43:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.218 14:43:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.218 14:43:21 -- accel/accel.sh@40 -- # local IFS=, 00:06:39.218 14:43:21 -- accel/accel.sh@41 -- # jq -r . 00:06:39.218 [2024-04-26 14:43:21.821180] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:39.218 [2024-04-26 14:43:21.821276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876841 ] 00:06:39.218 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.479 [2024-04-26 14:43:21.888407] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.479 [2024-04-26 14:43:21.959591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val= 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val= 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val=0x1 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val= 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val= 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val=crc32c 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val=32 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val= 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val=software 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:21 -- accel/accel.sh@22 -- # accel_module=software 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:21 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:21 -- accel/accel.sh@20 -- # val=32 00:06:39.479 14:43:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:22 -- accel/accel.sh@20 -- # val=32 00:06:39.479 14:43:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:22 -- 
accel/accel.sh@20 -- # val=1 00:06:39.479 14:43:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.479 14:43:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:22 -- accel/accel.sh@20 -- # val=Yes 00:06:39.479 14:43:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:22 -- accel/accel.sh@20 -- # val= 00:06:39.479 14:43:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # read -r var val 00:06:39.479 14:43:22 -- accel/accel.sh@20 -- # val= 00:06:39.479 14:43:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # IFS=: 00:06:39.479 14:43:22 -- accel/accel.sh@19 -- # read -r var val 00:06:40.422 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.422 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.422 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.422 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.422 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.422 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.422 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.422 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.422 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.422 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.422 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.422 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.422 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.422 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.422 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.422 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.683 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.683 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.683 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.683 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.683 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.683 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.683 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.683 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.683 14:43:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.683 14:43:23 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:40.683 14:43:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.683 00:06:40.683 real 0m1.297s 00:06:40.683 user 0m1.199s 00:06:40.683 sys 0m0.109s 00:06:40.683 14:43:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.683 14:43:23 -- common/autotest_common.sh@10 -- # set +x 00:06:40.683 ************************************ 00:06:40.683 END TEST accel_crc32c 00:06:40.683 ************************************ 00:06:40.683 14:43:23 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:40.683 14:43:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:40.683 14:43:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.683 14:43:23 -- common/autotest_common.sh@10 -- # set +x 00:06:40.683 ************************************ 00:06:40.683 START TEST 
accel_crc32c_C2 00:06:40.683 ************************************ 00:06:40.683 14:43:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:40.683 14:43:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.683 14:43:23 -- accel/accel.sh@17 -- # local accel_module 00:06:40.683 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.683 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.683 14:43:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:40.683 14:43:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:40.683 14:43:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.683 14:43:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.683 14:43:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.683 14:43:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.683 14:43:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.683 14:43:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.683 14:43:23 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.683 14:43:23 -- accel/accel.sh@41 -- # jq -r . 00:06:40.683 [2024-04-26 14:43:23.304316] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:40.683 [2024-04-26 14:43:23.304412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877199 ] 00:06:40.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.944 [2024-04-26 14:43:23.369384] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.944 [2024-04-26 14:43:23.440044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val=0x1 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val=crc32c 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val=0 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val=software 00:06:40.944 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.944 14:43:23 -- accel/accel.sh@22 -- # accel_module=software 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.944 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.944 14:43:23 -- accel/accel.sh@20 -- # val=32 00:06:40.945 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 14:43:23 -- accel/accel.sh@20 -- # val=32 00:06:40.945 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 14:43:23 -- accel/accel.sh@20 -- # val=1 00:06:40.945 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 14:43:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.945 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 14:43:23 -- accel/accel.sh@20 -- # val=Yes 00:06:40.945 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.945 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 14:43:23 -- accel/accel.sh@20 -- # val= 00:06:40.945 14:43:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 14:43:23 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- 
accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.330 14:43:24 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:42.330 14:43:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.330 00:06:42.330 real 0m1.296s 00:06:42.330 user 0m1.195s 00:06:42.330 sys 0m0.111s 00:06:42.330 14:43:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.330 14:43:24 -- common/autotest_common.sh@10 -- # set +x 00:06:42.330 ************************************ 00:06:42.330 END TEST accel_crc32c_C2 00:06:42.330 ************************************ 00:06:42.330 14:43:24 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:42.330 14:43:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:42.330 14:43:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.330 14:43:24 -- common/autotest_common.sh@10 -- # set +x 00:06:42.330 ************************************ 00:06:42.330 START TEST accel_copy 00:06:42.330 ************************************ 00:06:42.330 14:43:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:42.330 14:43:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.330 14:43:24 -- accel/accel.sh@17 -- # local accel_module 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:42.330 14:43:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:42.330 14:43:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.330 14:43:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.330 14:43:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.330 14:43:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.330 14:43:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.330 14:43:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.330 14:43:24 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.330 14:43:24 -- accel/accel.sh@41 -- # jq -r . 00:06:42.330 [2024-04-26 14:43:24.784640] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:42.330 [2024-04-26 14:43:24.784715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877490 ] 00:06:42.330 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.330 [2024-04-26 14:43:24.850042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.330 [2024-04-26 14:43:24.922304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val=0x1 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val=copy 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val=software 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@22 -- # accel_module=software 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val=32 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val=32 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val=1 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val=Yes 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:42.330 14:43:24 -- accel/accel.sh@20 -- # val= 00:06:42.330 14:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # IFS=: 00:06:42.330 14:43:24 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.715 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.715 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.715 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.716 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.716 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.716 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.716 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.716 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.716 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.716 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.716 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.716 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.716 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.716 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.716 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.716 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.716 14:43:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.716 14:43:26 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:43.716 14:43:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.716 00:06:43.716 real 0m1.295s 00:06:43.716 user 0m1.195s 00:06:43.716 sys 0m0.110s 00:06:43.716 14:43:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.716 14:43:26 -- common/autotest_common.sh@10 -- # set +x 00:06:43.716 ************************************ 00:06:43.716 END TEST accel_copy 00:06:43.716 ************************************ 00:06:43.716 14:43:26 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.716 14:43:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:43.716 14:43:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.716 14:43:26 -- common/autotest_common.sh@10 -- # set +x 00:06:43.716 ************************************ 00:06:43.716 START TEST accel_fill 00:06:43.716 ************************************ 00:06:43.716 14:43:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.716 14:43:26 -- accel/accel.sh@16 -- # local accel_opc 
00:06:43.716 14:43:26 -- accel/accel.sh@17 -- # local accel_module 00:06:43.716 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.716 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.716 14:43:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.716 14:43:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.716 14:43:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.716 14:43:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.716 14:43:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.716 14:43:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.716 14:43:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.716 14:43:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.716 14:43:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.716 14:43:26 -- accel/accel.sh@41 -- # jq -r . 00:06:43.716 [2024-04-26 14:43:26.239128] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:43.716 [2024-04-26 14:43:26.239221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877729 ] 00:06:43.716 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.716 [2024-04-26 14:43:26.304336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.716 [2024-04-26 14:43:26.377040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val=0x1 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val=fill 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val=0x80 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- 
# read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val=software 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@22 -- # accel_module=software 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val=64 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val=64 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val=1 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val=Yes 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:43.977 14:43:26 -- accel/accel.sh@20 -- # val= 00:06:43.977 14:43:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # IFS=: 00:06:43.977 14:43:26 -- accel/accel.sh@19 -- # read -r var val 00:06:44.921 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:44.921 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:44.921 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:44.921 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:44.921 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:44.921 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:44.921 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:44.921 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:44.921 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:44.921 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:44.921 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:44.921 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # 
IFS=: 00:06:44.921 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:44.921 14:43:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.921 14:43:27 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:44.921 14:43:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.921 00:06:44.921 real 0m1.295s 00:06:44.921 user 0m1.206s 00:06:44.921 sys 0m0.101s 00:06:44.921 14:43:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.921 14:43:27 -- common/autotest_common.sh@10 -- # set +x 00:06:44.921 ************************************ 00:06:44.921 END TEST accel_fill 00:06:44.921 ************************************ 00:06:44.921 14:43:27 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:44.921 14:43:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:44.921 14:43:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.921 14:43:27 -- common/autotest_common.sh@10 -- # set +x 00:06:45.182 ************************************ 00:06:45.182 START TEST accel_copy_crc32c 00:06:45.182 ************************************ 00:06:45.182 14:43:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:45.182 14:43:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.182 14:43:27 -- accel/accel.sh@17 -- # local accel_module 00:06:45.182 14:43:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:45.182 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.182 14:43:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:45.182 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.182 14:43:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.182 14:43:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.182 14:43:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.182 14:43:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.182 14:43:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.182 14:43:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.182 14:43:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:45.182 14:43:27 -- accel/accel.sh@41 -- # jq -r . 00:06:45.182 [2024-04-26 14:43:27.661377] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:45.182 [2024-04-26 14:43:27.661412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877967 ] 00:06:45.182 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.182 [2024-04-26 14:43:27.713931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.182 [2024-04-26 14:43:27.777742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.182 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:45.182 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.182 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.182 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.182 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:45.182 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.182 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.182 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.182 14:43:27 -- accel/accel.sh@20 -- # val=0x1 00:06:45.182 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.182 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.182 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val=0 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val=software 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@22 -- # accel_module=software 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val=32 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 
00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val=32 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val=1 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val=Yes 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:45.183 14:43:27 -- accel/accel.sh@20 -- # val= 00:06:45.183 14:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # IFS=: 00:06:45.183 14:43:27 -- accel/accel.sh@19 -- # read -r var val 00:06:46.570 14:43:28 -- accel/accel.sh@20 -- # val= 00:06:46.570 14:43:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # IFS=: 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # read -r var val 00:06:46.570 14:43:28 -- accel/accel.sh@20 -- # val= 00:06:46.570 14:43:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # IFS=: 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # read -r var val 00:06:46.570 14:43:28 -- accel/accel.sh@20 -- # val= 00:06:46.570 14:43:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # IFS=: 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # read -r var val 00:06:46.570 14:43:28 -- accel/accel.sh@20 -- # val= 00:06:46.570 14:43:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # IFS=: 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # read -r var val 00:06:46.570 14:43:28 -- accel/accel.sh@20 -- # val= 00:06:46.570 14:43:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # IFS=: 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # read -r var val 00:06:46.570 14:43:28 -- accel/accel.sh@20 -- # val= 00:06:46.570 14:43:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # IFS=: 00:06:46.570 14:43:28 -- accel/accel.sh@19 -- # read -r var val 00:06:46.570 14:43:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.570 14:43:28 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:46.570 14:43:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.570 00:06:46.570 real 0m1.257s 00:06:46.570 user 0m1.182s 00:06:46.570 sys 0m0.086s 00:06:46.570 14:43:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:46.570 14:43:28 -- common/autotest_common.sh@10 -- # set +x 00:06:46.570 ************************************ 00:06:46.570 END TEST accel_copy_crc32c 00:06:46.570 ************************************ 00:06:46.570 14:43:28 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:46.570 
14:43:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:46.570 14:43:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.570 14:43:28 -- common/autotest_common.sh@10 -- # set +x 00:06:46.570 ************************************ 00:06:46.570 START TEST accel_copy_crc32c_C2 00:06:46.570 ************************************ 00:06:46.570 14:43:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:46.570 14:43:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.570 14:43:29 -- accel/accel.sh@17 -- # local accel_module 00:06:46.570 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.570 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.570 14:43:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:46.570 14:43:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:46.570 14:43:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.570 14:43:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.570 14:43:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.570 14:43:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.570 14:43:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.570 14:43:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.570 14:43:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.570 14:43:29 -- accel/accel.sh@41 -- # jq -r . 00:06:46.570 [2024-04-26 14:43:29.076938] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:46.570 [2024-04-26 14:43:29.077012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid878320 ] 00:06:46.570 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.570 [2024-04-26 14:43:29.140296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.570 [2024-04-26 14:43:29.208110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val= 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val= 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val=0x1 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val= 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val= 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 
14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val=0 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val= 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val=software 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@22 -- # accel_module=software 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val=32 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val=32 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val=1 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val=Yes 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val= 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:46.835 14:43:29 -- accel/accel.sh@20 -- # val= 00:06:46.835 14:43:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # IFS=: 00:06:46.835 14:43:29 -- accel/accel.sh@19 -- # read -r var val 00:06:47.920 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:47.920 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:47.920 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:47.920 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:47.920 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:47.920 14:43:30 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:47.920 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:47.920 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:47.920 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:47.920 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:47.920 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:47.920 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:47.920 14:43:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.920 14:43:30 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:47.920 14:43:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.920 00:06:47.920 real 0m1.288s 00:06:47.920 user 0m1.207s 00:06:47.920 sys 0m0.092s 00:06:47.920 14:43:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.920 14:43:30 -- common/autotest_common.sh@10 -- # set +x 00:06:47.920 ************************************ 00:06:47.920 END TEST accel_copy_crc32c_C2 00:06:47.920 ************************************ 00:06:47.920 14:43:30 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:47.920 14:43:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:47.920 14:43:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.920 14:43:30 -- common/autotest_common.sh@10 -- # set +x 00:06:47.920 ************************************ 00:06:47.920 START TEST accel_dualcast 00:06:47.920 ************************************ 00:06:47.920 14:43:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:47.920 14:43:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.920 14:43:30 -- accel/accel.sh@17 -- # local accel_module 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:47.920 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:47.920 14:43:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:47.920 14:43:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:47.920 14:43:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.920 14:43:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.920 14:43:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.920 14:43:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.920 14:43:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.920 14:43:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.920 14:43:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:47.920 14:43:30 -- accel/accel.sh@41 -- # jq -r . 00:06:47.920 [2024-04-26 14:43:30.533442] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:47.920 [2024-04-26 14:43:30.533513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid878683 ] 00:06:47.920 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.182 [2024-04-26 14:43:30.598414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.182 [2024-04-26 14:43:30.669347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val=0x1 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val=dualcast 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val=software 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@22 -- # accel_module=software 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val=32 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val=32 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val=1 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val=Yes 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:48.182 14:43:30 -- accel/accel.sh@20 -- # val= 00:06:48.182 14:43:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # IFS=: 00:06:48.182 14:43:30 -- accel/accel.sh@19 -- # read -r var val 00:06:49.568 14:43:31 -- accel/accel.sh@20 -- # val= 00:06:49.568 14:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # IFS=: 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # read -r var val 00:06:49.568 14:43:31 -- accel/accel.sh@20 -- # val= 00:06:49.568 14:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # IFS=: 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # read -r var val 00:06:49.568 14:43:31 -- accel/accel.sh@20 -- # val= 00:06:49.568 14:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # IFS=: 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # read -r var val 00:06:49.568 14:43:31 -- accel/accel.sh@20 -- # val= 00:06:49.568 14:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # IFS=: 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # read -r var val 00:06:49.568 14:43:31 -- accel/accel.sh@20 -- # val= 00:06:49.568 14:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # IFS=: 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # read -r var val 00:06:49.568 14:43:31 -- accel/accel.sh@20 -- # val= 00:06:49.568 14:43:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.568 14:43:31 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:31 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.569 14:43:31 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:49.569 14:43:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.569 00:06:49.569 real 0m1.295s 00:06:49.569 user 0m1.195s 00:06:49.569 sys 0m0.109s 00:06:49.569 14:43:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.569 14:43:31 -- common/autotest_common.sh@10 -- # set +x 00:06:49.569 ************************************ 00:06:49.569 END TEST accel_dualcast 00:06:49.569 ************************************ 00:06:49.569 14:43:31 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:49.569 14:43:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:49.569 14:43:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.569 14:43:31 -- common/autotest_common.sh@10 -- # set +x 00:06:49.569 ************************************ 00:06:49.569 START TEST accel_compare 00:06:49.569 ************************************ 00:06:49.569 14:43:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:49.569 14:43:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.569 14:43:31 -- 
accel/accel.sh@17 -- # local accel_module 00:06:49.569 14:43:31 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:31 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:49.569 14:43:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:49.569 14:43:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.569 14:43:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.569 14:43:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.569 14:43:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.569 14:43:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.569 14:43:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.569 14:43:31 -- accel/accel.sh@40 -- # local IFS=, 00:06:49.569 14:43:31 -- accel/accel.sh@41 -- # jq -r . 00:06:49.569 [2024-04-26 14:43:32.011835] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:49.569 [2024-04-26 14:43:32.011926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879045 ] 00:06:49.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.569 [2024-04-26 14:43:32.077824] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.569 [2024-04-26 14:43:32.148802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val= 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val= 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val=0x1 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val= 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val= 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val=compare 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val= 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- 
accel/accel.sh@20 -- # val=software 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val=32 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val=32 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val=1 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val=Yes 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val= 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:49.569 14:43:32 -- accel/accel.sh@20 -- # val= 00:06:49.569 14:43:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # IFS=: 00:06:49.569 14:43:32 -- accel/accel.sh@19 -- # read -r var val 00:06:50.954 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:50.954 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:50.954 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:50.954 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:50.954 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:50.954 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:50.954 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:50.954 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:50.954 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:50.954 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:50.954 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:50.954 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:50.954 14:43:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.954 14:43:33 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:50.954 14:43:33 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:50.954 00:06:50.954 real 0m1.296s 00:06:50.954 user 0m1.206s 00:06:50.954 sys 0m0.100s 00:06:50.954 14:43:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.954 14:43:33 -- common/autotest_common.sh@10 -- # set +x 00:06:50.954 ************************************ 00:06:50.954 END TEST accel_compare 00:06:50.954 ************************************ 00:06:50.954 14:43:33 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:50.954 14:43:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:50.954 14:43:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.954 14:43:33 -- common/autotest_common.sh@10 -- # set +x 00:06:50.954 ************************************ 00:06:50.954 START TEST accel_xor 00:06:50.954 ************************************ 00:06:50.954 14:43:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:50.954 14:43:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.954 14:43:33 -- accel/accel.sh@17 -- # local accel_module 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:50.954 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:50.954 14:43:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:50.954 14:43:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:50.954 14:43:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.954 14:43:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.954 14:43:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.954 14:43:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.954 14:43:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.954 14:43:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.954 14:43:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.954 14:43:33 -- accel/accel.sh@41 -- # jq -r . 00:06:50.954 [2024-04-26 14:43:33.457134] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:50.954 [2024-04-26 14:43:33.457203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879399 ] 00:06:50.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.954 [2024-04-26 14:43:33.521438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.954 [2024-04-26 14:43:33.590745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val=0x1 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val=xor 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val=2 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val=software 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val=32 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val=32 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- 
accel/accel.sh@20 -- # val=1 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val=Yes 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:51.244 14:43:33 -- accel/accel.sh@20 -- # val= 00:06:51.244 14:43:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # IFS=: 00:06:51.244 14:43:33 -- accel/accel.sh@19 -- # read -r var val 00:06:52.188 14:43:34 -- accel/accel.sh@20 -- # val= 00:06:52.188 14:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # IFS=: 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # read -r var val 00:06:52.188 14:43:34 -- accel/accel.sh@20 -- # val= 00:06:52.188 14:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # IFS=: 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # read -r var val 00:06:52.188 14:43:34 -- accel/accel.sh@20 -- # val= 00:06:52.188 14:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # IFS=: 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # read -r var val 00:06:52.188 14:43:34 -- accel/accel.sh@20 -- # val= 00:06:52.188 14:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # IFS=: 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # read -r var val 00:06:52.188 14:43:34 -- accel/accel.sh@20 -- # val= 00:06:52.188 14:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # IFS=: 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # read -r var val 00:06:52.188 14:43:34 -- accel/accel.sh@20 -- # val= 00:06:52.188 14:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # IFS=: 00:06:52.188 14:43:34 -- accel/accel.sh@19 -- # read -r var val 00:06:52.188 14:43:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.188 14:43:34 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:52.188 14:43:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.188 00:06:52.188 real 0m1.293s 00:06:52.188 user 0m1.198s 00:06:52.188 sys 0m0.106s 00:06:52.188 14:43:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.188 14:43:34 -- common/autotest_common.sh@10 -- # set +x 00:06:52.188 ************************************ 00:06:52.188 END TEST accel_xor 00:06:52.188 ************************************ 00:06:52.188 14:43:34 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:52.188 14:43:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:52.188 14:43:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.188 14:43:34 -- common/autotest_common.sh@10 -- # set +x 00:06:52.448 ************************************ 00:06:52.448 START TEST accel_xor 
00:06:52.448 ************************************ 00:06:52.448 14:43:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:52.448 14:43:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.449 14:43:34 -- accel/accel.sh@17 -- # local accel_module 00:06:52.449 14:43:34 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:34 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:52.449 14:43:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:52.449 14:43:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.449 14:43:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.449 14:43:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.449 14:43:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.449 14:43:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.449 14:43:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.449 14:43:34 -- accel/accel.sh@40 -- # local IFS=, 00:06:52.449 14:43:34 -- accel/accel.sh@41 -- # jq -r . 00:06:52.449 [2024-04-26 14:43:34.927945] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:52.449 [2024-04-26 14:43:34.928005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879658 ] 00:06:52.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.449 [2024-04-26 14:43:34.992708] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.449 [2024-04-26 14:43:35.056064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val= 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val= 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val=0x1 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val= 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val= 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val=xor 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val=3 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val= 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val=software 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@22 -- # accel_module=software 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val=32 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val=32 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val=1 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val=Yes 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val= 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:52.449 14:43:35 -- accel/accel.sh@20 -- # val= 00:06:52.449 14:43:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # IFS=: 00:06:52.449 14:43:35 -- accel/accel.sh@19 -- # read -r var val 00:06:53.835 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:53.835 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:53.835 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:53.835 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:53.835 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:53.835 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:53.835 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:53.835 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:53.835 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:53.835 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # 
read -r var val 00:06:53.835 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:53.835 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:53.835 14:43:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.835 14:43:36 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:53.835 14:43:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.835 00:06:53.835 real 0m1.285s 00:06:53.835 user 0m1.191s 00:06:53.835 sys 0m0.102s 00:06:53.835 14:43:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.835 14:43:36 -- common/autotest_common.sh@10 -- # set +x 00:06:53.835 ************************************ 00:06:53.835 END TEST accel_xor 00:06:53.835 ************************************ 00:06:53.835 14:43:36 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:53.835 14:43:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:53.835 14:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.835 14:43:36 -- common/autotest_common.sh@10 -- # set +x 00:06:53.835 ************************************ 00:06:53.835 START TEST accel_dif_verify 00:06:53.835 ************************************ 00:06:53.835 14:43:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:53.835 14:43:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.835 14:43:36 -- accel/accel.sh@17 -- # local accel_module 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:53.835 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:53.835 14:43:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:53.835 14:43:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:53.835 14:43:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.835 14:43:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.835 14:43:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.835 14:43:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.835 14:43:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.835 14:43:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.835 14:43:36 -- accel/accel.sh@40 -- # local IFS=, 00:06:53.835 14:43:36 -- accel/accel.sh@41 -- # jq -r . 00:06:53.835 [2024-04-26 14:43:36.401143] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
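The dif_verify case follows the same pattern; the only visible difference is the buffer geometry in the trace, where 512-byte and 8-byte values appear alongside the 4096-byte transfer (the trace does not label them, but they are consistent with a protection-information block size and metadata size). Under the same assumptions as the dualcast sketch:

  # Hypothetical manual run of the DIF verify workload.
  "$SPDK_BIN" -t 1 -w dif_verify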
00:06:53.835 [2024-04-26 14:43:36.401208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879901 ] 00:06:53.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.835 [2024-04-26 14:43:36.467055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.096 [2024-04-26 14:43:36.539451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val=0x1 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val=dif_verify 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val=software 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@22 -- # accel_module=software 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r 
var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val=32 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.096 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.096 14:43:36 -- accel/accel.sh@20 -- # val=32 00:06:54.096 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.097 14:43:36 -- accel/accel.sh@20 -- # val=1 00:06:54.097 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.097 14:43:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.097 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.097 14:43:36 -- accel/accel.sh@20 -- # val=No 00:06:54.097 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.097 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:54.097 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:54.097 14:43:36 -- accel/accel.sh@20 -- # val= 00:06:54.097 14:43:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # IFS=: 00:06:54.097 14:43:36 -- accel/accel.sh@19 -- # read -r var val 00:06:55.039 14:43:37 -- accel/accel.sh@20 -- # val= 00:06:55.039 14:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # IFS=: 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # read -r var val 00:06:55.039 14:43:37 -- accel/accel.sh@20 -- # val= 00:06:55.039 14:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # IFS=: 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # read -r var val 00:06:55.039 14:43:37 -- accel/accel.sh@20 -- # val= 00:06:55.039 14:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # IFS=: 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # read -r var val 00:06:55.039 14:43:37 -- accel/accel.sh@20 -- # val= 00:06:55.039 14:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # IFS=: 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # read -r var val 00:06:55.039 14:43:37 -- accel/accel.sh@20 -- # val= 00:06:55.039 14:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # IFS=: 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # read -r var val 00:06:55.039 14:43:37 -- accel/accel.sh@20 -- # val= 00:06:55.039 14:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # IFS=: 00:06:55.039 14:43:37 -- accel/accel.sh@19 -- # read -r var val 00:06:55.039 14:43:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.039 14:43:37 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:55.039 14:43:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.039 00:06:55.039 real 0m1.298s 00:06:55.039 user 0m1.203s 00:06:55.039 sys 0m0.107s 00:06:55.039 14:43:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.039 14:43:37 -- common/autotest_common.sh@10 -- # set +x 00:06:55.039 
************************************ 00:06:55.039 END TEST accel_dif_verify 00:06:55.039 ************************************ 00:06:55.300 14:43:37 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:55.300 14:43:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:55.300 14:43:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.300 14:43:37 -- common/autotest_common.sh@10 -- # set +x 00:06:55.300 ************************************ 00:06:55.300 START TEST accel_dif_generate 00:06:55.300 ************************************ 00:06:55.300 14:43:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:55.300 14:43:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.300 14:43:37 -- accel/accel.sh@17 -- # local accel_module 00:06:55.300 14:43:37 -- accel/accel.sh@19 -- # IFS=: 00:06:55.300 14:43:37 -- accel/accel.sh@19 -- # read -r var val 00:06:55.300 14:43:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:55.300 14:43:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:55.300 14:43:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.300 14:43:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.300 14:43:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.300 14:43:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.300 14:43:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.300 14:43:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.300 14:43:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:55.300 14:43:37 -- accel/accel.sh@41 -- # jq -r . 00:06:55.300 [2024-04-26 14:43:37.878803] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
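Every case in this run is wrapped in the same START TEST / END TEST banners plus a real/user/sys timing line, so a long capture like this one can be summarized by filtering for just those markers. The log file name below is a placeholder for wherever this console output was saved:

  # Illustrative only: pull the per-test banners and timings out of a saved copy of this log.
  grep -E 'START TEST|END TEST|real[[:space:]]+[0-9]' nvmf-tcp-phy-autotest.log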
00:06:55.300 [2024-04-26 14:43:37.878875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880164 ] 00:06:55.300 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.300 [2024-04-26 14:43:37.944019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.561 [2024-04-26 14:43:38.018059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val= 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val= 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val=0x1 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val= 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val= 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val=dif_generate 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.561 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.561 14:43:38 -- accel/accel.sh@20 -- # val= 00:06:55.561 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.562 14:43:38 -- accel/accel.sh@20 -- # val=software 00:06:55.562 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.562 14:43:38 -- accel/accel.sh@22 -- # accel_module=software 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # read 
-r var val 00:06:55.562 14:43:38 -- accel/accel.sh@20 -- # val=32 00:06:55.562 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.562 14:43:38 -- accel/accel.sh@20 -- # val=32 00:06:55.562 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.562 14:43:38 -- accel/accel.sh@20 -- # val=1 00:06:55.562 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.562 14:43:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.562 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.562 14:43:38 -- accel/accel.sh@20 -- # val=No 00:06:55.562 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.562 14:43:38 -- accel/accel.sh@20 -- # val= 00:06:55.562 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:55.562 14:43:38 -- accel/accel.sh@20 -- # val= 00:06:55.562 14:43:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # IFS=: 00:06:55.562 14:43:38 -- accel/accel.sh@19 -- # read -r var val 00:06:56.503 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:56.503 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:56.503 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:56.503 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:56.503 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:56.503 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:56.503 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:56.503 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:56.503 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:56.503 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:56.503 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:56.503 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:56.503 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:56.503 14:43:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.503 14:43:39 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:56.503 14:43:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.503 00:06:56.503 real 0m1.297s 00:06:56.503 user 0m1.199s 00:06:56.503 sys 0m0.110s 00:06:56.503 14:43:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:56.503 14:43:39 -- common/autotest_common.sh@10 -- # set +x 00:06:56.503 
************************************ 00:06:56.504 END TEST accel_dif_generate 00:06:56.504 ************************************ 00:06:56.765 14:43:39 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:56.765 14:43:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:56.765 14:43:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.765 14:43:39 -- common/autotest_common.sh@10 -- # set +x 00:06:56.765 ************************************ 00:06:56.765 START TEST accel_dif_generate_copy 00:06:56.765 ************************************ 00:06:56.765 14:43:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:56.765 14:43:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.765 14:43:39 -- accel/accel.sh@17 -- # local accel_module 00:06:56.765 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:56.765 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:56.765 14:43:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:56.765 14:43:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:56.765 14:43:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.765 14:43:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.765 14:43:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.765 14:43:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.765 14:43:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.765 14:43:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.765 14:43:39 -- accel/accel.sh@40 -- # local IFS=, 00:06:56.765 14:43:39 -- accel/accel.sh@41 -- # jq -r . 00:06:56.765 [2024-04-26 14:43:39.359626] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
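Most of the volume in this trace is one small loop: accel.sh repeatedly sets IFS=:, runs read -r var val and dispatches on case "$var" in, which appears to split accel_perf's colon-separated summary output into key/value pairs (hence values such as val=software, val='1 seconds' and val=Yes above, and the derived accel_module/accel_opc assignments). A stripped-down illustration of that pattern, not the test's actual code:

  # Illustrative colon-separated key/value parsing in the spirit of the traced loop.
  # The case patterns and summary.txt are stand-ins, not accel.sh's real logic.
  while IFS=: read -r var val; do
      case "$var" in
          *Module*) accel_module=${val//[[:space:]]/} ;;
          *Workload*) accel_opc=${val//[[:space:]]/} ;;
      esac
  done < summary.txt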
00:06:56.765 [2024-04-26 14:43:39.359695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880522 ] 00:06:56.765 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.765 [2024-04-26 14:43:39.425196] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.027 [2024-04-26 14:43:39.496095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val=0x1 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val=software 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@22 -- # accel_module=software 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val=32 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val=32 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var 
val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val=1 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val=No 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.027 14:43:39 -- accel/accel.sh@20 -- # val= 00:06:57.027 14:43:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # IFS=: 00:06:57.027 14:43:39 -- accel/accel.sh@19 -- # read -r var val 00:06:57.967 14:43:40 -- accel/accel.sh@20 -- # val= 00:06:57.968 14:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:43:40 -- accel/accel.sh@20 -- # val= 00:06:57.968 14:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:43:40 -- accel/accel.sh@20 -- # val= 00:06:57.968 14:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:43:40 -- accel/accel.sh@20 -- # val= 00:06:57.968 14:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:43:40 -- accel/accel.sh@20 -- # val= 00:06:57.968 14:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:43:40 -- accel/accel.sh@20 -- # val= 00:06:57.968 14:43:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:43:40 -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:43:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.968 14:43:40 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:57.968 14:43:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.968 00:06:57.968 real 0m1.296s 00:06:57.968 user 0m1.197s 00:06:57.968 sys 0m0.109s 00:06:57.968 14:43:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:57.968 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:06:57.968 ************************************ 00:06:57.968 END TEST accel_dif_generate_copy 00:06:57.968 ************************************ 00:06:58.228 14:43:40 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:58.228 14:43:40 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.228 14:43:40 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:58.228 14:43:40 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.228 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:06:58.228 ************************************ 00:06:58.228 START TEST accel_comp 00:06:58.228 ************************************ 00:06:58.228 14:43:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.228 14:43:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.228 14:43:40 -- accel/accel.sh@17 -- # local accel_module 00:06:58.228 14:43:40 -- accel/accel.sh@19 -- # IFS=: 00:06:58.228 14:43:40 -- accel/accel.sh@19 -- # read -r var val 00:06:58.228 14:43:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.228 14:43:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.228 14:43:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.228 14:43:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.228 14:43:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.228 14:43:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.228 14:43:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.228 14:43:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.228 14:43:40 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.228 14:43:40 -- accel/accel.sh@41 -- # jq -r . 00:06:58.229 [2024-04-26 14:43:40.849368] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:58.229 [2024-04-26 14:43:40.849425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880878 ] 00:06:58.229 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.489 [2024-04-26 14:43:40.912337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.489 [2024-04-26 14:43:40.974542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val= 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val= 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val= 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val=0x1 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val= 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val= 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 
-- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val=compress 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val= 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val=software 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val=32 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val=32 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val=1 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val=No 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val= 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:58.489 14:43:41 -- accel/accel.sh@20 -- # val= 00:06:58.489 14:43:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # IFS=: 00:06:58.489 14:43:41 -- accel/accel.sh@19 -- # read -r var val 00:06:59.872 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.872 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.872 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.872 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # read 
-r var val 00:06:59.872 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.872 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.872 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.872 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.872 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.872 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.872 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.872 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.873 14:43:42 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:59.873 14:43:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.873 00:06:59.873 real 0m1.286s 00:06:59.873 user 0m1.194s 00:06:59.873 sys 0m0.104s 00:06:59.873 14:43:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:59.873 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:06:59.873 ************************************ 00:06:59.873 END TEST accel_comp 00:06:59.873 ************************************ 00:06:59.873 14:43:42 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.873 14:43:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:59.873 14:43:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.873 14:43:42 -- common/autotest_common.sh@10 -- # set +x 00:06:59.873 ************************************ 00:06:59.873 START TEST accel_decomp 00:06:59.873 ************************************ 00:06:59.873 14:43:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.873 14:43:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.873 14:43:42 -- accel/accel.sh@17 -- # local accel_module 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.873 14:43:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.873 14:43:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.873 14:43:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.873 14:43:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.873 14:43:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.873 14:43:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.873 14:43:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.873 14:43:42 -- accel/accel.sh@40 -- # local IFS=, 00:06:59.873 14:43:42 -- accel/accel.sh@41 -- # jq -r . 00:06:59.873 [2024-04-26 14:43:42.318261] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:59.873 [2024-04-26 14:43:42.318353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881239 ] 00:06:59.873 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.873 [2024-04-26 14:43:42.383030] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.873 [2024-04-26 14:43:42.451519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val=0x1 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val=decompress 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val=software 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@22 -- # accel_module=software 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val=32 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 
-- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val=32 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val=1 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val=Yes 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:06:59.873 14:43:42 -- accel/accel.sh@20 -- # val= 00:06:59.873 14:43:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # IFS=: 00:06:59.873 14:43:42 -- accel/accel.sh@19 -- # read -r var val 00:07:01.262 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.262 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.262 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.262 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.262 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.262 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.262 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.262 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.262 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.262 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.262 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.262 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.262 14:43:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.262 14:43:43 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.262 14:43:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.262 00:07:01.262 real 0m1.294s 00:07:01.262 user 0m1.210s 00:07:01.262 sys 0m0.096s 00:07:01.262 14:43:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:01.262 14:43:43 -- common/autotest_common.sh@10 -- # set +x 00:07:01.262 ************************************ 00:07:01.262 END TEST accel_decomp 00:07:01.262 ************************************ 00:07:01.262 14:43:43 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.262 14:43:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:01.262 14:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.262 14:43:43 -- common/autotest_common.sh@10 -- # set +x 00:07:01.262 ************************************ 00:07:01.262 START TEST accel_decmop_full 00:07:01.262 ************************************ 00:07:01.262 14:43:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.262 14:43:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.262 14:43:43 -- accel/accel.sh@17 -- # local accel_module 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.262 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.262 14:43:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.262 14:43:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.262 14:43:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.262 14:43:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.262 14:43:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.262 14:43:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.262 14:43:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.262 14:43:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.262 14:43:43 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.262 14:43:43 -- accel/accel.sh@41 -- # jq -r . 00:07:01.262 [2024-04-26 14:43:43.798742] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
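The build_accel_config trace that precedes each run (accel_json_cfg=(), the local IFS=, line, jq -r .) is the harness assembling an optional JSON accel config and validating it with jq before handing it to accel_perf, presumably over that /dev/fd/62 descriptor. A minimal sketch of the join-and-validate idiom, with hypothetical array entries (the runs in this log leave the array empty):

  # join JSON fragments with commas and let jq validate/pretty-print the result
  accel_json_cfg=('{"a": 1}' '{"b": 2}')   # hypothetical entries; empty in the CI runs above
  join_cfg() { local IFS=,; printf '[%s]\n' "$*"; }
  join_cfg "${accel_json_cfg[@]}" | jq -r .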
00:07:01.262 [2024-04-26 14:43:43.798815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881597 ] 00:07:01.262 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.262 [2024-04-26 14:43:43.863358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.522 [2024-04-26 14:43:43.927756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val=0x1 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val=decompress 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val=software 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val=32 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 
-- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val=32 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val=1 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val=Yes 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:01.522 14:43:43 -- accel/accel.sh@20 -- # val= 00:07:01.522 14:43:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # IFS=: 00:07:01.522 14:43:43 -- accel/accel.sh@19 -- # read -r var val 00:07:02.464 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.464 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.464 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.464 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.464 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.464 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.464 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.464 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.464 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.464 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.464 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.464 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.464 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.464 14:43:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.464 14:43:45 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.464 14:43:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.464 00:07:02.464 real 0m1.298s 00:07:02.464 user 0m1.203s 00:07:02.464 sys 0m0.107s 00:07:02.464 14:43:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.464 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:07:02.464 ************************************ 00:07:02.464 END TEST accel_decmop_full 00:07:02.464 ************************************ 00:07:02.465 14:43:45 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.465 14:43:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:02.465 14:43:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.465 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:07:02.725 ************************************ 00:07:02.725 START TEST accel_decomp_mcore 00:07:02.725 ************************************ 00:07:02.725 14:43:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.725 14:43:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.725 14:43:45 -- accel/accel.sh@17 -- # local accel_module 00:07:02.725 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 14:43:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.725 14:43:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.725 14:43:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.725 14:43:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.725 14:43:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.725 14:43:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.725 14:43:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.725 14:43:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.725 14:43:45 -- accel/accel.sh@40 -- # local IFS=, 00:07:02.725 14:43:45 -- accel/accel.sh@41 -- # jq -r . 00:07:02.725 [2024-04-26 14:43:45.283224] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
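The -m 0xf in the accel_decomp_mcore command just above is a core mask: 0xf is binary 1111, i.e. cores 0 through 3, which is why the notices that follow report 'Total cores available: 4' and start a reactor on each of those four cores. A sketch of the same four-core invocation (again without the harness-private -c /dev/fd/62), assuming the workspace path used throughout this log:

  # four-core decompress run: core mask 0xf selects cores 0,1,2,3
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -y -m 0xf \
      -l "$SPDK/test/accel/bib"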
00:07:02.725 [2024-04-26 14:43:45.283292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881918 ] 00:07:02.725 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.725 [2024-04-26 14:43:45.349626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.987 [2024-04-26 14:43:45.423720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.987 [2024-04-26 14:43:45.423858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.987 [2024-04-26 14:43:45.423957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.987 [2024-04-26 14:43:45.424123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val=0xf 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val=decompress 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val=software 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@22 -- # accel_module=software 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val=32 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val=32 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val=1 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val=Yes 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:02.987 14:43:45 -- accel/accel.sh@20 -- # val= 00:07:02.987 14:43:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # IFS=: 00:07:02.987 14:43:45 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:03.927 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:03.927 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:03.927 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:03.927 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:03.927 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:03.927 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:03.927 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:03.927 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.927 
14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:03.927 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:03.927 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:03.927 14:43:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.927 14:43:46 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.927 14:43:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.927 00:07:03.927 real 0m1.310s 00:07:03.927 user 0m4.449s 00:07:03.927 sys 0m0.109s 00:07:03.927 14:43:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.927 14:43:46 -- common/autotest_common.sh@10 -- # set +x 00:07:03.927 ************************************ 00:07:03.927 END TEST accel_decomp_mcore 00:07:03.927 ************************************ 00:07:04.188 14:43:46 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.188 14:43:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:04.188 14:43:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.188 14:43:46 -- common/autotest_common.sh@10 -- # set +x 00:07:04.188 ************************************ 00:07:04.188 START TEST accel_decomp_full_mcore 00:07:04.188 ************************************ 00:07:04.188 14:43:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.188 14:43:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.188 14:43:46 -- accel/accel.sh@17 -- # local accel_module 00:07:04.188 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.188 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.188 14:43:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.188 14:43:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.188 14:43:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.188 14:43:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.188 14:43:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.188 14:43:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.188 14:43:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.188 14:43:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.188 14:43:46 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.188 14:43:46 -- accel/accel.sh@41 -- # jq -r . 00:07:04.189 [2024-04-26 14:43:46.775865] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
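The accel_decomp_full_mcore command above differs from the plain mcore run only by the extra -o 0, and the effect shows up in the traced values: the per-operation size becomes '111250 bytes' instead of the default '4096 bytes' (the same shift appears in the earlier accel_decmop_full run). Read from this log alone, -o 0 appears to size each operation to the whole input file rather than 4 KiB blocks; that is an inference from the trace, not a statement from accel_perf documentation. Side by side:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # default 4 KiB operations
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -y -m 0xf -l "$SPDK/test/accel/bib"
  # whole-file sized operations, as used by the *_full_* tests in this log
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -y -o 0 -m 0xf -l "$SPDK/test/accel/bib"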
00:07:04.189 [2024-04-26 14:43:46.775937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882171 ] 00:07:04.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.189 [2024-04-26 14:43:46.842009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.449 [2024-04-26 14:43:46.916698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.449 [2024-04-26 14:43:46.916832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.449 [2024-04-26 14:43:46.916989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.449 [2024-04-26 14:43:46.917081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val=0xf 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val=decompress 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val=software 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@22 -- # accel_module=software 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val=32 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val=32 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val=1 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val=Yes 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:04.449 14:43:46 -- accel/accel.sh@20 -- # val= 00:07:04.449 14:43:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # IFS=: 00:07:04.449 14:43:46 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 
14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.832 14:43:48 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.832 14:43:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.832 00:07:05.832 real 0m1.322s 00:07:05.832 user 0m4.499s 00:07:05.832 sys 0m0.112s 00:07:05.832 14:43:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.832 14:43:48 -- common/autotest_common.sh@10 -- # set +x 00:07:05.832 ************************************ 00:07:05.832 END TEST accel_decomp_full_mcore 00:07:05.832 ************************************ 00:07:05.832 14:43:48 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.832 14:43:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:05.832 14:43:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.832 14:43:48 -- common/autotest_common.sh@10 -- # set +x 00:07:05.832 ************************************ 00:07:05.832 START TEST accel_decomp_mthread 00:07:05.832 ************************************ 00:07:05.832 14:43:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.832 14:43:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.832 14:43:48 -- accel/accel.sh@17 -- # local accel_module 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.832 14:43:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.832 14:43:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.832 14:43:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.832 14:43:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.832 14:43:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.832 14:43:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.832 14:43:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.832 14:43:48 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.832 14:43:48 -- accel/accel.sh@41 -- # jq -r . 00:07:05.832 [2024-04-26 14:43:48.283595] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
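The accel_decomp_mthread command above swaps the core mask for -T 2, and the matching traced value flips from 1 to 2 (compare the val=1 lines in the single-threaded runs with the val=2 line further down); the flag appears to request two worker threads on the single core, again an inference from this trace rather than documented behaviour. The equivalent standalone sketch:

  # single-core decompress run with two threads (-T 2); flag meaning inferred from this log
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -y -T 2 -l "$SPDK/test/accel/bib"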
00:07:05.832 [2024-04-26 14:43:48.283687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882422 ] 00:07:05.832 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.832 [2024-04-26 14:43:48.350547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.832 [2024-04-26 14:43:48.422383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.832 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.832 14:43:48 -- accel/accel.sh@20 -- # val=0x1 00:07:05.832 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val=decompress 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val=software 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@22 -- # accel_module=software 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val=32 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 
-- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val=32 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val=2 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val=Yes 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:05.833 14:43:48 -- accel/accel.sh@20 -- # val= 00:07:05.833 14:43:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # IFS=: 00:07:05.833 14:43:48 -- accel/accel.sh@19 -- # read -r var val 00:07:07.216 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.216 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.216 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.216 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.216 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.216 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.216 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.216 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.216 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.216 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.216 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.216 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.216 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.216 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.216 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.216 14:43:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.216 14:43:49 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.216 14:43:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.216 00:07:07.216 real 0m1.305s 00:07:07.216 user 0m1.209s 00:07:07.216 sys 0m0.108s 00:07:07.216 14:43:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.216 14:43:49 -- common/autotest_common.sh@10 -- # set +x 
00:07:07.216 ************************************ 00:07:07.216 END TEST accel_decomp_mthread 00:07:07.217 ************************************ 00:07:07.217 14:43:49 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.217 14:43:49 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:07.217 14:43:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.217 14:43:49 -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 ************************************ 00:07:07.217 START TEST accel_deomp_full_mthread 00:07:07.217 ************************************ 00:07:07.217 14:43:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.217 14:43:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.217 14:43:49 -- accel/accel.sh@17 -- # local accel_module 00:07:07.217 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.217 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.217 14:43:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.217 14:43:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:07.217 14:43:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.217 14:43:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.217 14:43:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.217 14:43:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.217 14:43:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.217 14:43:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.217 14:43:49 -- accel/accel.sh@40 -- # local IFS=, 00:07:07.217 14:43:49 -- accel/accel.sh@41 -- # jq -r . 00:07:07.217 [2024-04-26 14:43:49.778143] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:07.217 [2024-04-26 14:43:49.778202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882724 ] 00:07:07.217 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.217 [2024-04-26 14:43:49.840047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.477 [2024-04-26 14:43:49.902859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.477 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.477 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.477 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.477 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.477 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.477 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.477 14:43:49 -- accel/accel.sh@20 -- # val=0x1 00:07:07.477 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.477 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.477 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.477 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.477 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.477 14:43:49 -- accel/accel.sh@20 -- # val=decompress 00:07:07.477 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.477 14:43:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.477 14:43:49 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:07.477 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.477 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val=software 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@22 -- # accel_module=software 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val=32 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 
-- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val=32 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val=2 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val=Yes 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:07.478 14:43:49 -- accel/accel.sh@20 -- # val= 00:07:07.478 14:43:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # IFS=: 00:07:07.478 14:43:49 -- accel/accel.sh@19 -- # read -r var val 00:07:08.418 14:43:51 -- accel/accel.sh@20 -- # val= 00:07:08.418 14:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # IFS=: 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # read -r var val 00:07:08.418 14:43:51 -- accel/accel.sh@20 -- # val= 00:07:08.418 14:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # IFS=: 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # read -r var val 00:07:08.418 14:43:51 -- accel/accel.sh@20 -- # val= 00:07:08.418 14:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # IFS=: 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # read -r var val 00:07:08.418 14:43:51 -- accel/accel.sh@20 -- # val= 00:07:08.418 14:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # IFS=: 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # read -r var val 00:07:08.418 14:43:51 -- accel/accel.sh@20 -- # val= 00:07:08.418 14:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # IFS=: 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # read -r var val 00:07:08.418 14:43:51 -- accel/accel.sh@20 -- # val= 00:07:08.418 14:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # IFS=: 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # read -r var val 00:07:08.418 14:43:51 -- accel/accel.sh@20 -- # val= 00:07:08.418 14:43:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # IFS=: 00:07:08.418 14:43:51 -- accel/accel.sh@19 -- # read -r var val 00:07:08.418 14:43:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.418 14:43:51 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.418 14:43:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.419 00:07:08.419 real 0m1.319s 00:07:08.419 user 0m1.225s 00:07:08.419 sys 0m0.105s 00:07:08.419 14:43:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.419 14:43:51 -- common/autotest_common.sh@10 -- # set +x 
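(For reference, the accel_perf invocation that the two mthread decompress tests above wrap can be run by hand. This is a condensed sketch built only from the flags visible in this log; the harness also feeds a generated JSON accel config via -c /dev/fd/62, which is omitted here, and the paths are this workspace's.)

    # software decompress of test/accel/bib for 1 second with 2 worker threads (-T 2);
    # -y verifies the output; with -o 0 this run reports '111250 bytes' versus '4096 bytes' for the non-full variant
    ./build/examples/accel_perf -t 1 -w decompress \
        -l ./test/accel/bib -y -o 0 -T 2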
00:07:08.419 ************************************ 00:07:08.419 END TEST accel_deomp_full_mthread 00:07:08.419 ************************************ 00:07:08.691 14:43:51 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:08.691 14:43:51 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.691 14:43:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:08.691 14:43:51 -- accel/accel.sh@137 -- # build_accel_config 00:07:08.691 14:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.691 14:43:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.691 14:43:51 -- common/autotest_common.sh@10 -- # set +x 00:07:08.691 14:43:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.691 14:43:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.691 14:43:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.691 14:43:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.691 14:43:51 -- accel/accel.sh@40 -- # local IFS=, 00:07:08.691 14:43:51 -- accel/accel.sh@41 -- # jq -r . 00:07:08.691 ************************************ 00:07:08.691 START TEST accel_dif_functional_tests 00:07:08.691 ************************************ 00:07:08.691 14:43:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.691 [2024-04-26 14:43:51.297815] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:08.691 [2024-04-26 14:43:51.297878] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883087 ] 00:07:08.691 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.951 [2024-04-26 14:43:51.362252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.951 [2024-04-26 14:43:51.435582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.951 [2024-04-26 14:43:51.435701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.951 [2024-04-26 14:43:51.435704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.951 00:07:08.951 00:07:08.951 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.951 http://cunit.sourceforge.net/ 00:07:08.951 00:07:08.951 00:07:08.951 Suite: accel_dif 00:07:08.951 Test: verify: DIF generated, GUARD check ...passed 00:07:08.951 Test: verify: DIF generated, APPTAG check ...passed 00:07:08.951 Test: verify: DIF generated, REFTAG check ...passed 00:07:08.951 Test: verify: DIF not generated, GUARD check ...[2024-04-26 14:43:51.491678] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.951 [2024-04-26 14:43:51.491718] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.951 passed 00:07:08.951 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 14:43:51.491753] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.951 [2024-04-26 14:43:51.491767] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.951 passed 00:07:08.951 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 14:43:51.491783] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.951 [2024-04-26 
14:43:51.491798] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.951 passed 00:07:08.951 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:08.951 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 14:43:51.491848] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:08.951 passed 00:07:08.951 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:08.951 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:08.951 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:08.951 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 14:43:51.491964] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:08.951 passed 00:07:08.951 Test: generate copy: DIF generated, GUARD check ...passed 00:07:08.951 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:08.951 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:08.951 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:08.951 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:08.951 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:08.951 Test: generate copy: iovecs-len validate ...[2024-04-26 14:43:51.492152] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:08.951 passed 00:07:08.951 Test: generate copy: buffer alignment validate ...passed 00:07:08.951 00:07:08.951 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.951 suites 1 1 n/a 0 0 00:07:08.951 tests 20 20 20 0 0 00:07:08.951 asserts 204 204 204 0 n/a 00:07:08.951 00:07:08.951 Elapsed time = 0.002 seconds 00:07:08.951 00:07:08.951 real 0m0.360s 00:07:08.951 user 0m0.464s 00:07:08.951 sys 0m0.118s 00:07:08.951 14:43:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.951 14:43:51 -- common/autotest_common.sh@10 -- # set +x 00:07:08.951 ************************************ 00:07:08.951 END TEST accel_dif_functional_tests 00:07:08.951 ************************************ 00:07:09.212 00:07:09.212 real 0m32.813s 00:07:09.212 user 0m34.742s 00:07:09.212 sys 0m5.286s 00:07:09.212 14:43:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:09.212 14:43:51 -- common/autotest_common.sh@10 -- # set +x 00:07:09.212 ************************************ 00:07:09.212 END TEST accel 00:07:09.212 ************************************ 00:07:09.212 14:43:51 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:09.212 14:43:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.212 14:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.212 14:43:51 -- common/autotest_common.sh@10 -- # set +x 00:07:09.212 ************************************ 00:07:09.212 START TEST accel_rpc 00:07:09.212 ************************************ 00:07:09.212 14:43:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:09.473 * Looking for test storage... 
00:07:09.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:09.473 14:43:51 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:09.473 14:43:51 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=883372 00:07:09.473 14:43:51 -- accel/accel_rpc.sh@15 -- # waitforlisten 883372 00:07:09.473 14:43:51 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:09.473 14:43:51 -- common/autotest_common.sh@817 -- # '[' -z 883372 ']' 00:07:09.473 14:43:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.473 14:43:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:09.473 14:43:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.473 14:43:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:09.473 14:43:51 -- common/autotest_common.sh@10 -- # set +x 00:07:09.473 [2024-04-26 14:43:52.012195] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:09.473 [2024-04-26 14:43:52.012271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883372 ] 00:07:09.474 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.474 [2024-04-26 14:43:52.079771] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.735 [2024-04-26 14:43:52.155345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.157 14:43:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:10.157 14:43:52 -- common/autotest_common.sh@850 -- # return 0 00:07:10.157 14:43:52 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:10.157 14:43:52 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:10.157 14:43:52 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:10.157 14:43:52 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:10.157 14:43:52 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:10.157 14:43:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.157 14:43:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.157 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:07:10.418 ************************************ 00:07:10.418 START TEST accel_assign_opcode 00:07:10.418 ************************************ 00:07:10.418 14:43:52 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:10.418 14:43:52 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:10.418 14:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.418 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:07:10.418 [2024-04-26 14:43:52.933572] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:10.418 14:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.418 14:43:52 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:10.418 14:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.418 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:07:10.418 [2024-04-26 14:43:52.945599] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:07:10.418 14:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.418 14:43:52 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:10.418 14:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.418 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:07:10.678 14:43:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.678 14:43:53 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:10.678 14:43:53 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:10.678 14:43:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.678 14:43:53 -- accel/accel_rpc.sh@42 -- # grep software 00:07:10.678 14:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:10.678 14:43:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.678 software 00:07:10.678 00:07:10.678 real 0m0.208s 00:07:10.678 user 0m0.048s 00:07:10.678 sys 0m0.013s 00:07:10.678 14:43:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.678 14:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:10.678 ************************************ 00:07:10.678 END TEST accel_assign_opcode 00:07:10.678 ************************************ 00:07:10.678 14:43:53 -- accel/accel_rpc.sh@55 -- # killprocess 883372 00:07:10.678 14:43:53 -- common/autotest_common.sh@936 -- # '[' -z 883372 ']' 00:07:10.678 14:43:53 -- common/autotest_common.sh@940 -- # kill -0 883372 00:07:10.678 14:43:53 -- common/autotest_common.sh@941 -- # uname 00:07:10.678 14:43:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.678 14:43:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 883372 00:07:10.678 14:43:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.678 14:43:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.678 14:43:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 883372' 00:07:10.678 killing process with pid 883372 00:07:10.678 14:43:53 -- common/autotest_common.sh@955 -- # kill 883372 00:07:10.678 14:43:53 -- common/autotest_common.sh@960 -- # wait 883372 00:07:10.939 00:07:10.939 real 0m1.596s 00:07:10.939 user 0m1.728s 00:07:10.939 sys 0m0.469s 00:07:10.939 14:43:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.939 14:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:10.939 ************************************ 00:07:10.939 END TEST accel_rpc 00:07:10.939 ************************************ 00:07:10.939 14:43:53 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:10.939 14:43:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.939 14:43:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.939 14:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:11.200 ************************************ 00:07:11.200 START TEST app_cmdline 00:07:11.200 ************************************ 00:07:11.200 14:43:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.200 * Looking for test storage... 
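(The accel_rpc assign-opcode exercise above reduces to three RPCs against a spdk_tgt started with --wait-for-rpc; rpc_cmd is the harness wrapper, written here as plain scripts/rpc.py for clarity. A minimal by-hand equivalent:)

    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m software      # pin the 'copy' opcode to the software module
    ./scripts/rpc.py framework_start_init                      # complete initialization once assignments are done
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints "software" on success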
00:07:11.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.200 14:43:53 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:11.200 14:43:53 -- app/cmdline.sh@17 -- # spdk_tgt_pid=883830 00:07:11.200 14:43:53 -- app/cmdline.sh@18 -- # waitforlisten 883830 00:07:11.200 14:43:53 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:11.200 14:43:53 -- common/autotest_common.sh@817 -- # '[' -z 883830 ']' 00:07:11.200 14:43:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.200 14:43:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:11.200 14:43:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.200 14:43:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:11.200 14:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:11.200 [2024-04-26 14:43:53.788612] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:11.200 [2024-04-26 14:43:53.788679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883830 ] 00:07:11.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.200 [2024-04-26 14:43:53.853312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.459 [2024-04-26 14:43:53.926365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.031 14:43:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:12.031 14:43:54 -- common/autotest_common.sh@850 -- # return 0 00:07:12.031 14:43:54 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:12.031 { 00:07:12.031 "version": "SPDK v24.05-pre git sha1 8571999d8", 00:07:12.031 "fields": { 00:07:12.031 "major": 24, 00:07:12.031 "minor": 5, 00:07:12.031 "patch": 0, 00:07:12.031 "suffix": "-pre", 00:07:12.031 "commit": "8571999d8" 00:07:12.031 } 00:07:12.031 } 00:07:12.031 14:43:54 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:12.031 14:43:54 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:12.031 14:43:54 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:12.031 14:43:54 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:12.031 14:43:54 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:12.031 14:43:54 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:12.031 14:43:54 -- app/cmdline.sh@26 -- # sort 00:07:12.031 14:43:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.031 14:43:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.031 14:43:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.291 14:43:54 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:12.291 14:43:54 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:12.291 14:43:54 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.291 14:43:54 -- common/autotest_common.sh@638 -- # local es=0 00:07:12.291 14:43:54 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.291 14:43:54 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.291 14:43:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:12.291 14:43:54 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.291 14:43:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:12.291 14:43:54 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.291 14:43:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:12.291 14:43:54 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.291 14:43:54 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:12.291 14:43:54 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.291 request: 00:07:12.291 { 00:07:12.291 "method": "env_dpdk_get_mem_stats", 00:07:12.291 "req_id": 1 00:07:12.291 } 00:07:12.291 Got JSON-RPC error response 00:07:12.291 response: 00:07:12.291 { 00:07:12.291 "code": -32601, 00:07:12.291 "message": "Method not found" 00:07:12.291 } 00:07:12.291 14:43:54 -- common/autotest_common.sh@641 -- # es=1 00:07:12.291 14:43:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:12.291 14:43:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:12.291 14:43:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:12.291 14:43:54 -- app/cmdline.sh@1 -- # killprocess 883830 00:07:12.291 14:43:54 -- common/autotest_common.sh@936 -- # '[' -z 883830 ']' 00:07:12.291 14:43:54 -- common/autotest_common.sh@940 -- # kill -0 883830 00:07:12.291 14:43:54 -- common/autotest_common.sh@941 -- # uname 00:07:12.291 14:43:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.291 14:43:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 883830 00:07:12.291 14:43:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:12.291 14:43:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:12.291 14:43:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 883830' 00:07:12.291 killing process with pid 883830 00:07:12.291 14:43:54 -- common/autotest_common.sh@955 -- # kill 883830 00:07:12.291 14:43:54 -- common/autotest_common.sh@960 -- # wait 883830 00:07:12.552 00:07:12.552 real 0m1.529s 00:07:12.552 user 0m1.793s 00:07:12.552 sys 0m0.421s 00:07:12.552 14:43:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.552 14:43:55 -- common/autotest_common.sh@10 -- # set +x 00:07:12.552 ************************************ 00:07:12.552 END TEST app_cmdline 00:07:12.552 ************************************ 00:07:12.552 14:43:55 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:12.552 14:43:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.552 14:43:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.552 14:43:55 -- common/autotest_common.sh@10 -- # set +x 00:07:12.813 ************************************ 00:07:12.813 START TEST version 00:07:12.813 
************************************ 00:07:12.813 14:43:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:12.813 * Looking for test storage... 00:07:12.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:12.813 14:43:55 -- app/version.sh@17 -- # get_header_version major 00:07:12.813 14:43:55 -- app/version.sh@14 -- # cut -f2 00:07:12.813 14:43:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.813 14:43:55 -- app/version.sh@14 -- # tr -d '"' 00:07:12.813 14:43:55 -- app/version.sh@17 -- # major=24 00:07:12.813 14:43:55 -- app/version.sh@18 -- # get_header_version minor 00:07:12.813 14:43:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.813 14:43:55 -- app/version.sh@14 -- # cut -f2 00:07:12.813 14:43:55 -- app/version.sh@14 -- # tr -d '"' 00:07:12.813 14:43:55 -- app/version.sh@18 -- # minor=5 00:07:12.813 14:43:55 -- app/version.sh@19 -- # get_header_version patch 00:07:12.813 14:43:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.813 14:43:55 -- app/version.sh@14 -- # cut -f2 00:07:12.813 14:43:55 -- app/version.sh@14 -- # tr -d '"' 00:07:12.813 14:43:55 -- app/version.sh@19 -- # patch=0 00:07:12.813 14:43:55 -- app/version.sh@20 -- # get_header_version suffix 00:07:13.074 14:43:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:13.074 14:43:55 -- app/version.sh@14 -- # cut -f2 00:07:13.074 14:43:55 -- app/version.sh@14 -- # tr -d '"' 00:07:13.074 14:43:55 -- app/version.sh@20 -- # suffix=-pre 00:07:13.074 14:43:55 -- app/version.sh@22 -- # version=24.5 00:07:13.074 14:43:55 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:13.074 14:43:55 -- app/version.sh@28 -- # version=24.5rc0 00:07:13.074 14:43:55 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:13.074 14:43:55 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:13.074 14:43:55 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:13.074 14:43:55 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:13.074 00:07:13.074 real 0m0.176s 00:07:13.074 user 0m0.089s 00:07:13.074 sys 0m0.121s 00:07:13.074 14:43:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:13.074 14:43:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.074 ************************************ 00:07:13.074 END TEST version 00:07:13.074 ************************************ 00:07:13.074 14:43:55 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:13.074 14:43:55 -- spdk/autotest.sh@194 -- # uname -s 00:07:13.074 14:43:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:13.074 14:43:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.074 14:43:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.074 14:43:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:13.074 14:43:55 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:13.074 14:43:55 -- spdk/autotest.sh@258 -- # timing_exit lib 00:07:13.074 14:43:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:13.074 14:43:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.074 14:43:55 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:13.074 14:43:55 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:13.074 14:43:55 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:07:13.074 14:43:55 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:07:13.074 14:43:55 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:07:13.074 14:43:55 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:07:13.074 14:43:55 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:13.074 14:43:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:13.074 14:43:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.074 14:43:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.336 ************************************ 00:07:13.336 START TEST nvmf_tcp 00:07:13.336 ************************************ 00:07:13.336 14:43:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:13.336 * Looking for test storage... 00:07:13.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:13.336 14:43:55 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:13.336 14:43:55 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:13.336 14:43:55 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.336 14:43:55 -- nvmf/common.sh@7 -- # uname -s 00:07:13.336 14:43:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.336 14:43:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.336 14:43:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.336 14:43:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.336 14:43:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.336 14:43:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.336 14:43:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.336 14:43:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.336 14:43:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.336 14:43:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.336 14:43:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:13.336 14:43:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:13.337 14:43:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.337 14:43:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.337 14:43:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.337 14:43:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.337 14:43:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.337 14:43:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.337 14:43:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.337 14:43:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.337 14:43:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.337 14:43:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.337 14:43:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.337 14:43:55 -- paths/export.sh@5 -- # export PATH 00:07:13.337 14:43:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.337 14:43:55 -- nvmf/common.sh@47 -- # : 0 00:07:13.337 14:43:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.337 14:43:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.337 14:43:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.337 14:43:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.337 14:43:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.337 14:43:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.337 14:43:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.337 14:43:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.337 14:43:55 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:13.337 14:43:55 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:13.337 14:43:55 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:13.337 14:43:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:13.337 14:43:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.337 14:43:55 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:13.337 14:43:55 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:13.337 14:43:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:13.337 14:43:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.337 14:43:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.599 ************************************ 00:07:13.599 START TEST nvmf_example 00:07:13.599 ************************************ 00:07:13.599 14:43:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:13.599 * Looking for test storage... 
00:07:13.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.599 14:43:56 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.599 14:43:56 -- nvmf/common.sh@7 -- # uname -s 00:07:13.599 14:43:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.599 14:43:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.599 14:43:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.599 14:43:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.599 14:43:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.599 14:43:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.599 14:43:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.599 14:43:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.599 14:43:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.599 14:43:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.599 14:43:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:13.599 14:43:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:13.599 14:43:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.599 14:43:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.599 14:43:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.599 14:43:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.599 14:43:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.599 14:43:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.599 14:43:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.599 14:43:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.599 14:43:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.599 14:43:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.599 14:43:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.599 14:43:56 -- paths/export.sh@5 -- # export PATH 00:07:13.599 14:43:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.599 14:43:56 -- nvmf/common.sh@47 -- # : 0 00:07:13.599 14:43:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.599 14:43:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.599 14:43:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.599 14:43:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.599 14:43:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.599 14:43:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.599 14:43:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.599 14:43:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.599 14:43:56 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:13.599 14:43:56 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:13.599 14:43:56 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:13.599 14:43:56 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:13.599 14:43:56 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:13.599 14:43:56 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:13.599 14:43:56 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:13.599 14:43:56 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:13.599 14:43:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:13.599 14:43:56 -- common/autotest_common.sh@10 -- # set +x 00:07:13.599 14:43:56 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:13.599 14:43:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:13.599 14:43:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.599 14:43:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:13.599 14:43:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:13.599 14:43:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:13.599 14:43:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.599 14:43:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.599 14:43:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.599 14:43:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:13.599 14:43:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:13.599 14:43:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:13.599 14:43:56 -- 
common/autotest_common.sh@10 -- # set +x 00:07:21.746 14:44:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:21.746 14:44:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:21.746 14:44:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:21.746 14:44:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:21.746 14:44:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:21.746 14:44:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:21.746 14:44:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:21.746 14:44:03 -- nvmf/common.sh@295 -- # net_devs=() 00:07:21.746 14:44:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:21.746 14:44:03 -- nvmf/common.sh@296 -- # e810=() 00:07:21.746 14:44:03 -- nvmf/common.sh@296 -- # local -ga e810 00:07:21.746 14:44:03 -- nvmf/common.sh@297 -- # x722=() 00:07:21.746 14:44:03 -- nvmf/common.sh@297 -- # local -ga x722 00:07:21.746 14:44:03 -- nvmf/common.sh@298 -- # mlx=() 00:07:21.746 14:44:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:21.746 14:44:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.746 14:44:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:21.746 14:44:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:21.746 14:44:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:21.746 14:44:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.746 14:44:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:21.746 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:21.746 14:44:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.746 14:44:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:21.746 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:21.746 14:44:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:07:21.746 14:44:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:21.746 14:44:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.746 14:44:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.746 14:44:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:21.746 14:44:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.746 14:44:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:21.746 Found net devices under 0000:31:00.0: cvl_0_0 00:07:21.746 14:44:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.746 14:44:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.746 14:44:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.746 14:44:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:21.746 14:44:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.746 14:44:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:21.746 Found net devices under 0000:31:00.1: cvl_0_1 00:07:21.746 14:44:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.746 14:44:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:21.746 14:44:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:21.746 14:44:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:21.746 14:44:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.746 14:44:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.746 14:44:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.746 14:44:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:21.746 14:44:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.746 14:44:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.746 14:44:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:21.746 14:44:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.746 14:44:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.746 14:44:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:21.746 14:44:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:21.746 14:44:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.746 14:44:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.746 14:44:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.746 14:44:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.746 14:44:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:21.746 14:44:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.746 14:44:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.746 14:44:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.746 14:44:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:21.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:21.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:07:21.746 00:07:21.746 --- 10.0.0.2 ping statistics --- 00:07:21.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.746 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:07:21.746 14:44:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:07:21.746 00:07:21.746 --- 10.0.0.1 ping statistics --- 00:07:21.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.746 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:07:21.746 14:44:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.746 14:44:03 -- nvmf/common.sh@411 -- # return 0 00:07:21.746 14:44:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:21.746 14:44:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.746 14:44:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:21.746 14:44:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:21.747 14:44:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.747 14:44:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:21.747 14:44:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:21.747 14:44:03 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:21.747 14:44:03 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:21.747 14:44:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:21.747 14:44:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.747 14:44:03 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:21.747 14:44:03 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:21.747 14:44:03 -- target/nvmf_example.sh@34 -- # nvmfpid=888095 00:07:21.747 14:44:03 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.747 14:44:03 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:21.747 14:44:03 -- target/nvmf_example.sh@36 -- # waitforlisten 888095 00:07:21.747 14:44:03 -- common/autotest_common.sh@817 -- # '[' -z 888095 ']' 00:07:21.747 14:44:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.747 14:44:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:21.747 14:44:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
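(The nvmf_tcp_init plumbing recorded above is plain iproute2 work: one port of the e810 pair, cvl_0_0, is moved into a network namespace to play the target side, cvl_0_1 stays in the root namespace as the initiator, and reachability is confirmed with ping. Condensed from the commands in this log:)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace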
00:07:21.747 14:44:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:21.747 14:44:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.747 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.747 14:44:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:21.747 14:44:04 -- common/autotest_common.sh@850 -- # return 0 00:07:21.747 14:44:04 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:21.747 14:44:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:21.747 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:21.747 14:44:04 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.747 14:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.747 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:21.747 14:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.747 14:44:04 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:21.747 14:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.747 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:21.747 14:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.747 14:44:04 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:21.747 14:44:04 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:21.747 14:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.747 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:21.747 14:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.747 14:44:04 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:21.747 14:44:04 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:21.747 14:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.747 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:21.747 14:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.747 14:44:04 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.747 14:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.747 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:21.747 14:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.747 14:44:04 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:21.747 14:44:04 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:22.062 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.086 Initializing NVMe Controllers 00:07:32.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:32.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:32.086 Initialization complete. Launching workers. 
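With the example target listening on /var/tmp/spdk.sock inside the namespace, the test provisions it over JSON-RPC and then drives it from the initiator side with spdk_nvme_perf. The rpc_cmd calls above correspond roughly to the sketch below (rpc_cmd is the test wrapper around SPDK's scripts/rpc.py; addresses, NQNs, and sizes are taken from the log):

# Hedged sketch of the provisioning sequence above, expressed with scripts/rpc.py.
RPC=./scripts/rpc.py                                     # default socket /var/tmp/spdk.sock, as used by waitforlisten above
$RPC nvmf_create_transport -t tcp -o -u 8192             # TCP transport, options as in the log
$RPC bdev_malloc_create 64 512                           # 64 MiB RAM-backed bdev, 512-byte blocks (Malloc0 in this run)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: 10 s of 4 KiB random mixed I/O at queue depth 64 against that subsystem.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The IOPS/latency block that follows is spdk_nvme_perf's end-of-run summary for that single namespace.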
00:07:32.086 ======================================================== 00:07:32.086 Latency(us) 00:07:32.086 Device Information : IOPS MiB/s Average min max 00:07:32.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18809.90 73.48 3401.95 728.93 15717.13 00:07:32.086 ======================================================== 00:07:32.086 Total : 18809.90 73.48 3401.95 728.93 15717.13 00:07:32.086 00:07:32.086 14:44:14 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:32.086 14:44:14 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:32.086 14:44:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:32.086 14:44:14 -- nvmf/common.sh@117 -- # sync 00:07:32.086 14:44:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.086 14:44:14 -- nvmf/common.sh@120 -- # set +e 00:07:32.086 14:44:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.086 14:44:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.086 rmmod nvme_tcp 00:07:32.086 rmmod nvme_fabrics 00:07:32.086 rmmod nvme_keyring 00:07:32.086 14:44:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.086 14:44:14 -- nvmf/common.sh@124 -- # set -e 00:07:32.086 14:44:14 -- nvmf/common.sh@125 -- # return 0 00:07:32.086 14:44:14 -- nvmf/common.sh@478 -- # '[' -n 888095 ']' 00:07:32.086 14:44:14 -- nvmf/common.sh@479 -- # killprocess 888095 00:07:32.087 14:44:14 -- common/autotest_common.sh@936 -- # '[' -z 888095 ']' 00:07:32.087 14:44:14 -- common/autotest_common.sh@940 -- # kill -0 888095 00:07:32.087 14:44:14 -- common/autotest_common.sh@941 -- # uname 00:07:32.087 14:44:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.087 14:44:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 888095 00:07:32.087 14:44:14 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:32.087 14:44:14 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:32.087 14:44:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 888095' 00:07:32.087 killing process with pid 888095 00:07:32.087 14:44:14 -- common/autotest_common.sh@955 -- # kill 888095 00:07:32.087 14:44:14 -- common/autotest_common.sh@960 -- # wait 888095 00:07:32.347 nvmf threads initialize successfully 00:07:32.347 bdev subsystem init successfully 00:07:32.347 created a nvmf target service 00:07:32.347 create targets's poll groups done 00:07:32.347 all subsystems of target started 00:07:32.347 nvmf target is running 00:07:32.347 all subsystems of target stopped 00:07:32.347 destroy targets's poll groups done 00:07:32.347 destroyed the nvmf target service 00:07:32.347 bdev subsystem finish successfully 00:07:32.347 nvmf threads destroy successfully 00:07:32.347 14:44:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:32.347 14:44:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:32.347 14:44:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:32.347 14:44:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.347 14:44:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.347 14:44:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.347 14:44:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.347 14:44:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.261 14:44:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.261 14:44:16 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:34.261 14:44:16 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:07:34.261 14:44:16 -- common/autotest_common.sh@10 -- # set +x 00:07:34.261 00:07:34.261 real 0m20.857s 00:07:34.261 user 0m46.117s 00:07:34.261 sys 0m6.439s 00:07:34.261 14:44:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.261 14:44:16 -- common/autotest_common.sh@10 -- # set +x 00:07:34.261 ************************************ 00:07:34.261 END TEST nvmf_example 00:07:34.261 ************************************ 00:07:34.523 14:44:16 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.523 14:44:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:34.523 14:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.523 14:44:16 -- common/autotest_common.sh@10 -- # set +x 00:07:34.523 ************************************ 00:07:34.523 START TEST nvmf_filesystem 00:07:34.523 ************************************ 00:07:34.523 14:44:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.787 * Looking for test storage... 00:07:34.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.788 14:44:17 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:34.788 14:44:17 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:34.788 14:44:17 -- common/autotest_common.sh@34 -- # set -e 00:07:34.788 14:44:17 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:34.788 14:44:17 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:34.788 14:44:17 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:34.788 14:44:17 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:34.788 14:44:17 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:34.788 14:44:17 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:34.788 14:44:17 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:34.788 14:44:17 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:34.788 14:44:17 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:34.788 14:44:17 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:34.788 14:44:17 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:34.788 14:44:17 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:34.788 14:44:17 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:34.788 14:44:17 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:34.788 14:44:17 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:34.788 14:44:17 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:34.788 14:44:17 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:34.788 14:44:17 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:34.788 14:44:17 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:34.788 14:44:17 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:34.788 14:44:17 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:34.788 14:44:17 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:34.788 14:44:17 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:34.788 14:44:17 -- common/build_config.sh@19 -- # 
CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.788 14:44:17 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:34.788 14:44:17 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:34.788 14:44:17 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:34.788 14:44:17 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:34.788 14:44:17 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:34.788 14:44:17 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:34.788 14:44:17 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:34.788 14:44:17 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:34.788 14:44:17 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:34.788 14:44:17 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:34.788 14:44:17 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:34.788 14:44:17 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:34.788 14:44:17 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:34.788 14:44:17 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:34.788 14:44:17 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:34.788 14:44:17 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:34.788 14:44:17 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:34.788 14:44:17 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:34.788 14:44:17 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:34.788 14:44:17 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:34.788 14:44:17 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:34.788 14:44:17 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:34.788 14:44:17 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:34.788 14:44:17 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:34.788 14:44:17 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:34.788 14:44:17 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:34.788 14:44:17 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:34.788 14:44:17 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:34.788 14:44:17 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:34.788 14:44:17 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:34.788 14:44:17 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:34.788 14:44:17 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:34.788 14:44:17 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:34.788 14:44:17 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:34.788 14:44:17 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:34.788 14:44:17 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:34.788 14:44:17 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:34.788 14:44:17 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:34.788 14:44:17 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:34.788 14:44:17 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:34.788 14:44:17 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:34.788 14:44:17 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:34.788 14:44:17 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:34.788 14:44:17 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:34.788 14:44:17 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:34.788 14:44:17 -- common/build_config.sh@65 
-- # CONFIG_SHARED=y 00:07:34.788 14:44:17 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:34.788 14:44:17 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:34.788 14:44:17 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:34.788 14:44:17 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:34.788 14:44:17 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:34.788 14:44:17 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:34.788 14:44:17 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:34.788 14:44:17 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:34.788 14:44:17 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:34.788 14:44:17 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:34.788 14:44:17 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:34.788 14:44:17 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:34.788 14:44:17 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:34.788 14:44:17 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:34.788 14:44:17 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:34.788 14:44:17 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:34.788 14:44:17 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:34.788 14:44:17 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.788 14:44:17 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.788 14:44:17 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.788 14:44:17 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.788 14:44:17 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.788 14:44:17 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.788 14:44:17 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:34.788 14:44:17 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.788 14:44:17 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:34.788 14:44:17 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:34.788 14:44:17 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:34.788 14:44:17 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:34.788 14:44:17 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:34.788 14:44:17 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:34.788 14:44:17 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:34.788 14:44:17 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:34.788 #define SPDK_CONFIG_H 00:07:34.788 #define SPDK_CONFIG_APPS 1 00:07:34.788 #define SPDK_CONFIG_ARCH native 00:07:34.788 #undef SPDK_CONFIG_ASAN 00:07:34.788 #undef SPDK_CONFIG_AVAHI 00:07:34.788 #undef SPDK_CONFIG_CET 00:07:34.788 #define SPDK_CONFIG_COVERAGE 1 00:07:34.788 #define SPDK_CONFIG_CROSS_PREFIX 00:07:34.788 #undef SPDK_CONFIG_CRYPTO 00:07:34.788 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:34.788 #undef SPDK_CONFIG_CUSTOMOCF 00:07:34.788 
#undef SPDK_CONFIG_DAOS 00:07:34.788 #define SPDK_CONFIG_DAOS_DIR 00:07:34.788 #define SPDK_CONFIG_DEBUG 1 00:07:34.788 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:34.788 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:34.788 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:34.788 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:34.788 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:34.788 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.788 #define SPDK_CONFIG_EXAMPLES 1 00:07:34.788 #undef SPDK_CONFIG_FC 00:07:34.788 #define SPDK_CONFIG_FC_PATH 00:07:34.788 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:34.788 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:34.788 #undef SPDK_CONFIG_FUSE 00:07:34.788 #undef SPDK_CONFIG_FUZZER 00:07:34.788 #define SPDK_CONFIG_FUZZER_LIB 00:07:34.788 #undef SPDK_CONFIG_GOLANG 00:07:34.788 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:34.788 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:34.788 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:34.788 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:34.788 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:34.788 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:34.788 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:34.788 #define SPDK_CONFIG_IDXD 1 00:07:34.788 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:34.788 #undef SPDK_CONFIG_IPSEC_MB 00:07:34.788 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:34.788 #define SPDK_CONFIG_ISAL 1 00:07:34.788 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:34.788 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:34.788 #define SPDK_CONFIG_LIBDIR 00:07:34.789 #undef SPDK_CONFIG_LTO 00:07:34.789 #define SPDK_CONFIG_MAX_LCORES 00:07:34.789 #define SPDK_CONFIG_NVME_CUSE 1 00:07:34.789 #undef SPDK_CONFIG_OCF 00:07:34.789 #define SPDK_CONFIG_OCF_PATH 00:07:34.789 #define SPDK_CONFIG_OPENSSL_PATH 00:07:34.789 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:34.789 #define SPDK_CONFIG_PGO_DIR 00:07:34.789 #undef SPDK_CONFIG_PGO_USE 00:07:34.789 #define SPDK_CONFIG_PREFIX /usr/local 00:07:34.789 #undef SPDK_CONFIG_RAID5F 00:07:34.789 #undef SPDK_CONFIG_RBD 00:07:34.789 #define SPDK_CONFIG_RDMA 1 00:07:34.789 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:34.789 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:34.789 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:34.789 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:34.789 #define SPDK_CONFIG_SHARED 1 00:07:34.789 #undef SPDK_CONFIG_SMA 00:07:34.789 #define SPDK_CONFIG_TESTS 1 00:07:34.789 #undef SPDK_CONFIG_TSAN 00:07:34.789 #define SPDK_CONFIG_UBLK 1 00:07:34.789 #define SPDK_CONFIG_UBSAN 1 00:07:34.789 #undef SPDK_CONFIG_UNIT_TESTS 00:07:34.789 #undef SPDK_CONFIG_URING 00:07:34.789 #define SPDK_CONFIG_URING_PATH 00:07:34.789 #undef SPDK_CONFIG_URING_ZNS 00:07:34.789 #undef SPDK_CONFIG_USDT 00:07:34.789 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:34.789 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:34.789 #define SPDK_CONFIG_VFIO_USER 1 00:07:34.789 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:34.789 #define SPDK_CONFIG_VHOST 1 00:07:34.789 #define SPDK_CONFIG_VIRTIO 1 00:07:34.789 #undef SPDK_CONFIG_VTUNE 00:07:34.789 #define SPDK_CONFIG_VTUNE_DIR 00:07:34.789 #define SPDK_CONFIG_WERROR 1 00:07:34.789 #define SPDK_CONFIG_WPDK_DIR 00:07:34.789 #undef SPDK_CONFIG_XNVME 00:07:34.789 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:34.789 14:44:17 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:34.789 14:44:17 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.789 14:44:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.789 14:44:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.789 14:44:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.789 14:44:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.789 14:44:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.789 14:44:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.789 14:44:17 -- paths/export.sh@5 -- # export PATH 00:07:34.789 14:44:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.789 14:44:17 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.789 14:44:17 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.789 14:44:17 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.789 14:44:17 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.789 14:44:17 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:34.789 14:44:17 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.789 14:44:17 -- pm/common@67 -- # TEST_TAG=N/A 00:07:34.789 14:44:17 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:34.789 14:44:17 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:34.789 14:44:17 -- pm/common@71 -- # uname -s 00:07:34.789 14:44:17 -- pm/common@71 -- # PM_OS=Linux 00:07:34.789 14:44:17 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:34.789 14:44:17 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:34.789 14:44:17 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:34.789 14:44:17 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:07:34.789 14:44:17 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:34.789 14:44:17 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:34.789 14:44:17 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:34.789 14:44:17 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:34.789 14:44:17 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:34.789 14:44:17 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:34.789 14:44:17 -- common/autotest_common.sh@57 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:34.789 14:44:17 -- common/autotest_common.sh@61 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:34.789 14:44:17 -- common/autotest_common.sh@63 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:34.789 14:44:17 -- common/autotest_common.sh@65 -- # : 1 00:07:34.789 14:44:17 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:34.789 14:44:17 -- common/autotest_common.sh@67 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:34.789 14:44:17 -- common/autotest_common.sh@69 -- # : 00:07:34.789 14:44:17 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:34.789 14:44:17 -- common/autotest_common.sh@71 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:34.789 14:44:17 -- common/autotest_common.sh@73 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:34.789 14:44:17 -- common/autotest_common.sh@75 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:34.789 14:44:17 -- common/autotest_common.sh@77 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:34.789 14:44:17 -- common/autotest_common.sh@79 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:34.789 14:44:17 -- common/autotest_common.sh@81 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:34.789 14:44:17 -- common/autotest_common.sh@83 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:34.789 14:44:17 -- common/autotest_common.sh@85 -- # : 1 00:07:34.789 14:44:17 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:34.789 14:44:17 -- common/autotest_common.sh@87 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:34.789 14:44:17 -- common/autotest_common.sh@89 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:34.789 14:44:17 -- common/autotest_common.sh@91 -- # : 1 
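The long run of ': 0' / ': 1' lines paired with exports above is autotest_common.sh giving every test knob a value and exporting it, so the flags chosen in autorun-spdk.conf stay visible to every child script. A hedged reconstruction of that pattern, using a few of the values from this run (the := expansions are what produce the ": 0" / ": 1" xtrace lines); the same shape repeats for the remaining flags below.

# Hedged sketch of the flag-defaulting pattern traced above; values reflect this
# run's autorun-spdk.conf rather than upstream defaults.
: "${RUN_NIGHTLY:=0}";               export RUN_NIGHTLY
: "${SPDK_RUN_FUNCTIONAL_TEST:=1}";  export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_NVME_CLI:=1}";        export SPDK_TEST_NVME_CLI
: "${SPDK_TEST_NVMF:=1}";            export SPDK_TEST_NVMF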
00:07:34.789 14:44:17 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:34.789 14:44:17 -- common/autotest_common.sh@93 -- # : 1 00:07:34.789 14:44:17 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:34.789 14:44:17 -- common/autotest_common.sh@95 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:34.789 14:44:17 -- common/autotest_common.sh@97 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:34.789 14:44:17 -- common/autotest_common.sh@99 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:34.789 14:44:17 -- common/autotest_common.sh@101 -- # : tcp 00:07:34.789 14:44:17 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:34.789 14:44:17 -- common/autotest_common.sh@103 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:34.789 14:44:17 -- common/autotest_common.sh@105 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:34.789 14:44:17 -- common/autotest_common.sh@107 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:34.789 14:44:17 -- common/autotest_common.sh@109 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:34.789 14:44:17 -- common/autotest_common.sh@111 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:34.789 14:44:17 -- common/autotest_common.sh@113 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:34.789 14:44:17 -- common/autotest_common.sh@115 -- # : 0 00:07:34.789 14:44:17 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:34.789 14:44:17 -- common/autotest_common.sh@117 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:34.790 14:44:17 -- common/autotest_common.sh@119 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:34.790 14:44:17 -- common/autotest_common.sh@121 -- # : 1 00:07:34.790 14:44:17 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:34.790 14:44:17 -- common/autotest_common.sh@123 -- # : 00:07:34.790 14:44:17 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:34.790 14:44:17 -- common/autotest_common.sh@125 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:34.790 14:44:17 -- common/autotest_common.sh@127 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:34.790 14:44:17 -- common/autotest_common.sh@129 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:34.790 14:44:17 -- common/autotest_common.sh@131 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:34.790 14:44:17 -- common/autotest_common.sh@133 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:34.790 14:44:17 -- common/autotest_common.sh@135 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:34.790 14:44:17 -- common/autotest_common.sh@137 -- # : 00:07:34.790 14:44:17 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:34.790 14:44:17 -- 
common/autotest_common.sh@139 -- # : true 00:07:34.790 14:44:17 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:34.790 14:44:17 -- common/autotest_common.sh@141 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:34.790 14:44:17 -- common/autotest_common.sh@143 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:34.790 14:44:17 -- common/autotest_common.sh@145 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:34.790 14:44:17 -- common/autotest_common.sh@147 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:34.790 14:44:17 -- common/autotest_common.sh@149 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:34.790 14:44:17 -- common/autotest_common.sh@151 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:34.790 14:44:17 -- common/autotest_common.sh@153 -- # : e810 00:07:34.790 14:44:17 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:34.790 14:44:17 -- common/autotest_common.sh@155 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:34.790 14:44:17 -- common/autotest_common.sh@157 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:34.790 14:44:17 -- common/autotest_common.sh@159 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:34.790 14:44:17 -- common/autotest_common.sh@161 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:34.790 14:44:17 -- common/autotest_common.sh@163 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:34.790 14:44:17 -- common/autotest_common.sh@166 -- # : 00:07:34.790 14:44:17 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:34.790 14:44:17 -- common/autotest_common.sh@168 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:34.790 14:44:17 -- common/autotest_common.sh@170 -- # : 0 00:07:34.790 14:44:17 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:34.790 14:44:17 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.790 14:44:17 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.790 14:44:17 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:34.790 14:44:17 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:34.790 14:44:17 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.790 14:44:17 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.790 14:44:17 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.790 14:44:17 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.790 14:44:17 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.790 14:44:17 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.790 14:44:17 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.790 14:44:17 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.790 14:44:17 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:34.790 14:44:17 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:34.790 14:44:17 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.790 14:44:17 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.790 14:44:17 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.790 14:44:17 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.790 14:44:17 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:34.790 14:44:17 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:34.790 14:44:17 -- common/autotest_common.sh@199 -- # cat 00:07:34.790 14:44:17 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:34.790 14:44:17 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.790 14:44:17 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.790 14:44:17 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.790 14:44:17 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.790 14:44:17 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:34.790 14:44:17 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:34.790 14:44:17 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.790 14:44:17 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.790 14:44:17 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.790 14:44:17 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.790 14:44:17 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.790 14:44:17 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.790 14:44:17 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.790 14:44:17 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.790 14:44:17 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.790 14:44:17 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.790 14:44:17 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.790 14:44:17 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.790 14:44:17 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:34.790 14:44:17 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:34.790 14:44:17 -- common/autotest_common.sh@252 -- # valgrind= 00:07:34.790 14:44:17 -- common/autotest_common.sh@258 -- # uname -s 00:07:34.790 14:44:17 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:07:34.790 14:44:17 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:34.790 14:44:17 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:34.790 14:44:17 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:34.790 14:44:17 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:34.790 14:44:17 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:34.790 
14:44:17 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:34.790 14:44:17 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j144 00:07:34.790 14:44:17 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:34.790 14:44:17 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:34.790 14:44:17 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:34.790 14:44:17 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:34.790 14:44:17 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:34.790 14:44:17 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:34.790 14:44:17 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:34.790 14:44:17 -- common/autotest_common.sh@307 -- # [[ -z 890908 ]] 00:07:34.790 14:44:17 -- common/autotest_common.sh@307 -- # kill -0 890908 00:07:34.791 14:44:17 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:34.791 14:44:17 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:34.791 14:44:17 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:34.791 14:44:17 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:34.791 14:44:17 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:34.791 14:44:17 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:34.791 14:44:17 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:34.791 14:44:17 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:34.791 14:44:17 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.koE8XJ 00:07:34.791 14:44:17 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:34.791 14:44:17 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:34.791 14:44:17 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:34.791 14:44:17 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.koE8XJ/tests/target /tmp/spdk.koE8XJ 00:07:34.791 14:44:17 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:34.791 14:44:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:34.791 14:44:17 -- common/autotest_common.sh@316 -- # df -T 00:07:34.791 14:44:17 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:34.791 14:44:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:34.791 14:44:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:34.791 14:44:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:07:34.791 14:44:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=123016200192 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=129371000832 00:07:34.791 14:44:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=6354800640 00:07:34.791 14:44:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=64682885120 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685498368 00:07:34.791 14:44:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:07:34.791 14:44:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=25864454144 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=25874202624 00:07:34.791 14:44:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=9748480 00:07:34.791 14:44:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=189440 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:07:34.791 14:44:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=314368 00:07:34.791 14:44:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=64684937216 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685502464 00:07:34.791 14:44:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=565248 00:07:34.791 14:44:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=12937093120 00:07:34.791 14:44:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12937097216 00:07:34.791 14:44:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:34.791 14:44:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:34.791 14:44:17 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:34.791 * Looking for test storage... 
00:07:34.791 14:44:17 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:34.791 14:44:17 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:34.791 14:44:17 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.791 14:44:17 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:34.791 14:44:17 -- common/autotest_common.sh@361 -- # mount=/ 00:07:34.791 14:44:17 -- common/autotest_common.sh@363 -- # target_space=123016200192 00:07:34.791 14:44:17 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:34.791 14:44:17 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:34.791 14:44:17 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:34.791 14:44:17 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:34.791 14:44:17 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:34.791 14:44:17 -- common/autotest_common.sh@370 -- # new_size=8569393152 00:07:34.791 14:44:17 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:34.791 14:44:17 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.791 14:44:17 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.791 14:44:17 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.791 14:44:17 -- common/autotest_common.sh@378 -- # return 0 00:07:34.791 14:44:17 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:34.791 14:44:17 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:34.791 14:44:17 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:34.791 14:44:17 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:34.791 14:44:17 -- common/autotest_common.sh@1673 -- # true 00:07:34.791 14:44:17 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:34.791 14:44:17 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:34.791 14:44:17 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:34.791 14:44:17 -- common/autotest_common.sh@27 -- # exec 00:07:34.791 14:44:17 -- common/autotest_common.sh@29 -- # exec 00:07:34.791 14:44:17 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:34.791 14:44:17 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:34.791 14:44:17 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:34.791 14:44:17 -- common/autotest_common.sh@18 -- # set -x 00:07:34.791 14:44:17 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.791 14:44:17 -- nvmf/common.sh@7 -- # uname -s 00:07:34.791 14:44:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.791 14:44:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.791 14:44:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.791 14:44:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.791 14:44:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.791 14:44:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.791 14:44:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.791 14:44:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.791 14:44:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.791 14:44:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.791 14:44:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:34.791 14:44:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:34.791 14:44:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.791 14:44:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.791 14:44:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.791 14:44:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.791 14:44:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.791 14:44:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.791 14:44:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.791 14:44:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.791 14:44:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.791 14:44:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.792 14:44:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.792 14:44:17 -- paths/export.sh@5 -- # export PATH 00:07:34.792 14:44:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.792 14:44:17 -- nvmf/common.sh@47 -- # : 0 00:07:34.792 14:44:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.792 14:44:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.792 14:44:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.792 14:44:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.792 14:44:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.792 14:44:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.792 14:44:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.792 14:44:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.792 14:44:17 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:34.792 14:44:17 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:34.792 14:44:17 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:34.792 14:44:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:34.792 14:44:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.792 14:44:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:34.792 14:44:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:34.792 14:44:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:34.792 14:44:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.792 14:44:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.792 14:44:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.792 14:44:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:34.792 14:44:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:34.792 14:44:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.792 14:44:17 -- common/autotest_common.sh@10 -- # set +x 00:07:42.935 14:44:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:42.935 14:44:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.935 14:44:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.935 14:44:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.935 14:44:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.935 14:44:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.935 14:44:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.935 14:44:24 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:42.935 14:44:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.935 14:44:24 -- nvmf/common.sh@296 -- # e810=() 00:07:42.935 14:44:24 -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.935 14:44:24 -- nvmf/common.sh@297 -- # x722=() 00:07:42.935 14:44:24 -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.935 14:44:24 -- nvmf/common.sh@298 -- # mlx=() 00:07:42.935 14:44:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.935 14:44:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.935 14:44:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.935 14:44:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.935 14:44:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.935 14:44:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.935 14:44:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:42.935 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:42.935 14:44:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.935 14:44:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:42.935 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:42.935 14:44:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.935 14:44:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.935 14:44:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.935 14:44:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:42.935 14:44:24 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.935 14:44:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:42.935 Found net devices under 0000:31:00.0: cvl_0_0 00:07:42.935 14:44:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.935 14:44:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.935 14:44:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.935 14:44:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:42.935 14:44:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.935 14:44:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:42.935 Found net devices under 0000:31:00.1: cvl_0_1 00:07:42.935 14:44:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.935 14:44:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:42.935 14:44:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:42.935 14:44:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:42.935 14:44:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.935 14:44:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.935 14:44:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.935 14:44:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.935 14:44:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.935 14:44:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.935 14:44:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.935 14:44:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.935 14:44:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.935 14:44:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.935 14:44:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.935 14:44:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.935 14:44:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.935 14:44:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.935 14:44:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.935 14:44:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.935 14:44:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.935 14:44:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.935 14:44:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.935 14:44:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:07:42.935 00:07:42.935 --- 10.0.0.2 ping statistics --- 00:07:42.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.935 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:07:42.935 14:44:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:07:42.935 00:07:42.935 --- 10.0.0.1 ping statistics --- 00:07:42.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.935 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:07:42.935 14:44:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.935 14:44:24 -- nvmf/common.sh@411 -- # return 0 00:07:42.935 14:44:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:42.935 14:44:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.935 14:44:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:42.935 14:44:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.935 14:44:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:42.935 14:44:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:42.935 14:44:24 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:42.935 14:44:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:42.935 14:44:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.935 14:44:24 -- common/autotest_common.sh@10 -- # set +x 00:07:42.935 ************************************ 00:07:42.935 START TEST nvmf_filesystem_no_in_capsule 00:07:42.935 ************************************ 00:07:42.935 14:44:24 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:42.935 14:44:24 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:42.935 14:44:24 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:42.935 14:44:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:42.935 14:44:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:42.935 14:44:24 -- common/autotest_common.sh@10 -- # set +x 00:07:42.935 14:44:24 -- nvmf/common.sh@470 -- # nvmfpid=894919 00:07:42.935 14:44:24 -- nvmf/common.sh@471 -- # waitforlisten 894919 00:07:42.935 14:44:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.935 14:44:24 -- common/autotest_common.sh@817 -- # '[' -z 894919 ']' 00:07:42.935 14:44:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.935 14:44:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:42.935 14:44:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.935 14:44:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:42.935 14:44:24 -- common/autotest_common.sh@10 -- # set +x 00:07:42.935 [2024-04-26 14:44:25.021686] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:42.935 [2024-04-26 14:44:25.021734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.935 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.935 [2024-04-26 14:44:25.093715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.936 [2024-04-26 14:44:25.162219] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:42.936 [2024-04-26 14:44:25.162259] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.936 [2024-04-26 14:44:25.162268] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.936 [2024-04-26 14:44:25.162276] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.936 [2024-04-26 14:44:25.162283] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.936 [2024-04-26 14:44:25.162330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.936 [2024-04-26 14:44:25.162466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.936 [2024-04-26 14:44:25.162623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.936 [2024-04-26 14:44:25.162623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.197 14:44:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:43.197 14:44:25 -- common/autotest_common.sh@850 -- # return 0 00:07:43.197 14:44:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:43.197 14:44:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:43.197 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.197 14:44:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.197 14:44:25 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:43.197 14:44:25 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:43.197 14:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.197 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.197 [2024-04-26 14:44:25.828384] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.197 14:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.197 14:44:25 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:43.197 14:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.197 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.458 Malloc1 00:07:43.458 14:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.458 14:44:25 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.458 14:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.458 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.458 14:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.458 14:44:25 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.458 14:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.458 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.458 14:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.458 14:44:25 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.458 14:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.458 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.458 [2024-04-26 14:44:25.956239] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.458 14:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.458 14:44:25 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:07:43.458 14:44:25 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:43.458 14:44:25 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:43.458 14:44:25 -- common/autotest_common.sh@1366 -- # local bs 00:07:43.458 14:44:25 -- common/autotest_common.sh@1367 -- # local nb 00:07:43.458 14:44:25 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:43.458 14:44:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.458 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.458 14:44:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.458 14:44:25 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:43.458 { 00:07:43.458 "name": "Malloc1", 00:07:43.458 "aliases": [ 00:07:43.458 "80fb7726-7eec-452f-847a-3c29777f2612" 00:07:43.458 ], 00:07:43.458 "product_name": "Malloc disk", 00:07:43.458 "block_size": 512, 00:07:43.458 "num_blocks": 1048576, 00:07:43.458 "uuid": "80fb7726-7eec-452f-847a-3c29777f2612", 00:07:43.458 "assigned_rate_limits": { 00:07:43.458 "rw_ios_per_sec": 0, 00:07:43.458 "rw_mbytes_per_sec": 0, 00:07:43.458 "r_mbytes_per_sec": 0, 00:07:43.458 "w_mbytes_per_sec": 0 00:07:43.458 }, 00:07:43.458 "claimed": true, 00:07:43.458 "claim_type": "exclusive_write", 00:07:43.458 "zoned": false, 00:07:43.458 "supported_io_types": { 00:07:43.458 "read": true, 00:07:43.458 "write": true, 00:07:43.458 "unmap": true, 00:07:43.458 "write_zeroes": true, 00:07:43.458 "flush": true, 00:07:43.458 "reset": true, 00:07:43.458 "compare": false, 00:07:43.458 "compare_and_write": false, 00:07:43.458 "abort": true, 00:07:43.458 "nvme_admin": false, 00:07:43.458 "nvme_io": false 00:07:43.458 }, 00:07:43.458 "memory_domains": [ 00:07:43.458 { 00:07:43.458 "dma_device_id": "system", 00:07:43.458 "dma_device_type": 1 00:07:43.458 }, 00:07:43.458 { 00:07:43.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.458 "dma_device_type": 2 00:07:43.458 } 00:07:43.458 ], 00:07:43.458 "driver_specific": {} 00:07:43.458 } 00:07:43.458 ]' 00:07:43.458 14:44:25 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:43.458 14:44:26 -- common/autotest_common.sh@1369 -- # bs=512 00:07:43.458 14:44:26 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:43.458 14:44:26 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:43.458 14:44:26 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:43.458 14:44:26 -- common/autotest_common.sh@1374 -- # echo 512 00:07:43.458 14:44:26 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:43.458 14:44:26 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.373 14:44:27 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.373 14:44:27 -- common/autotest_common.sh@1184 -- # local i=0 00:07:45.373 14:44:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.373 14:44:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:45.373 14:44:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:47.288 14:44:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:47.288 14:44:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:47.288 14:44:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.288 14:44:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
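
The trace up to this point shows the target-side setup (TCP transport, 512 MB Malloc bdev, subsystem, namespace, listener, all driven through rpc_cmd) followed by the host-side nvme connect and the wait for the SPDKISFASTANDAWESOME serial to appear in lsblk. As a minimal standalone sketch of that same sequence — assuming nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace as in this run, that scripts/rpc.py talks to it on the default socket, and with all flags, NQNs and addresses copied from the trace above rather than being general defaults — it amounts to roughly:

# target side (inside the target network namespace); sketch only
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MB bdev, 512-byte blocks -> 1048576 blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: connect and wait for the namespace to show up
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done   # simplified waitforserial

In the test itself rpc_cmd is the harness wrapper (roughly equivalent to calling scripts/rpc.py directly), and waitforserial retries a bounded number of times instead of looping forever.
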
00:07:47.288 14:44:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.288 14:44:29 -- common/autotest_common.sh@1194 -- # return 0 00:07:47.288 14:44:29 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:47.288 14:44:29 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:47.288 14:44:29 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:47.288 14:44:29 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:47.288 14:44:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:47.288 14:44:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:47.288 14:44:29 -- setup/common.sh@80 -- # echo 536870912 00:07:47.288 14:44:29 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:47.288 14:44:29 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:47.288 14:44:29 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:47.288 14:44:29 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:47.548 14:44:30 -- target/filesystem.sh@69 -- # partprobe 00:07:48.119 14:44:30 -- target/filesystem.sh@70 -- # sleep 1 00:07:49.062 14:44:31 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:49.062 14:44:31 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:49.062 14:44:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.062 14:44:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.062 14:44:31 -- common/autotest_common.sh@10 -- # set +x 00:07:49.323 ************************************ 00:07:49.323 START TEST filesystem_ext4 00:07:49.323 ************************************ 00:07:49.323 14:44:31 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:49.323 14:44:31 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:49.323 14:44:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.323 14:44:31 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:49.323 14:44:31 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:49.323 14:44:31 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:49.323 14:44:31 -- common/autotest_common.sh@914 -- # local i=0 00:07:49.323 14:44:31 -- common/autotest_common.sh@915 -- # local force 00:07:49.323 14:44:31 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:49.323 14:44:31 -- common/autotest_common.sh@918 -- # force=-F 00:07:49.323 14:44:31 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:49.323 mke2fs 1.46.5 (30-Dec-2021) 00:07:49.323 Discarding device blocks: 0/522240 done 00:07:49.323 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.323 Filesystem UUID: e3b9bc31-171a-4193-8082-ffae6159613c 00:07:49.323 Superblock backups stored on blocks: 00:07:49.323 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.323 00:07:49.324 Allocating group tables: 0/64 done 00:07:49.324 Writing inode tables: 0/64 done 00:07:49.324 Creating journal (8192 blocks): done 00:07:49.583 Writing superblocks and filesystem accounting information: 0/64 done 00:07:49.583 00:07:49.583 14:44:31 -- common/autotest_common.sh@931 -- # return 0 00:07:49.583 14:44:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.844 14:44:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.844 14:44:32 -- target/filesystem.sh@25 -- # sync 00:07:49.844 14:44:32 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:07:49.844 14:44:32 -- target/filesystem.sh@27 -- # sync 00:07:49.844 14:44:32 -- target/filesystem.sh@29 -- # i=0 00:07:49.844 14:44:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.844 14:44:32 -- target/filesystem.sh@37 -- # kill -0 894919 00:07:49.844 14:44:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.844 14:44:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.844 14:44:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.844 14:44:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.844 00:07:49.844 real 0m0.643s 00:07:49.844 user 0m0.030s 00:07:49.844 sys 0m0.068s 00:07:49.844 14:44:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.844 14:44:32 -- common/autotest_common.sh@10 -- # set +x 00:07:49.844 ************************************ 00:07:49.844 END TEST filesystem_ext4 00:07:49.844 ************************************ 00:07:49.844 14:44:32 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:49.844 14:44:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.844 14:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.844 14:44:32 -- common/autotest_common.sh@10 -- # set +x 00:07:50.104 ************************************ 00:07:50.104 START TEST filesystem_btrfs 00:07:50.104 ************************************ 00:07:50.104 14:44:32 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:50.104 14:44:32 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:50.104 14:44:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.104 14:44:32 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:50.104 14:44:32 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:50.104 14:44:32 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:50.104 14:44:32 -- common/autotest_common.sh@914 -- # local i=0 00:07:50.104 14:44:32 -- common/autotest_common.sh@915 -- # local force 00:07:50.104 14:44:32 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:50.104 14:44:32 -- common/autotest_common.sh@920 -- # force=-f 00:07:50.104 14:44:32 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:50.365 btrfs-progs v6.6.2 00:07:50.365 See https://btrfs.readthedocs.io for more information. 00:07:50.365 00:07:50.365 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:50.365 NOTE: several default settings have changed in version 5.15, please make sure 00:07:50.365 this does not affect your deployments: 00:07:50.365 - DUP for metadata (-m dup) 00:07:50.365 - enabled no-holes (-O no-holes) 00:07:50.365 - enabled free-space-tree (-R free-space-tree) 00:07:50.365 00:07:50.365 Label: (null) 00:07:50.365 UUID: 065235e7-e8ca-4451-ae18-e677235eb17c 00:07:50.365 Node size: 16384 00:07:50.365 Sector size: 4096 00:07:50.365 Filesystem size: 510.00MiB 00:07:50.365 Block group profiles: 00:07:50.365 Data: single 8.00MiB 00:07:50.365 Metadata: DUP 32.00MiB 00:07:50.365 System: DUP 8.00MiB 00:07:50.365 SSD detected: yes 00:07:50.365 Zoned device: no 00:07:50.365 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:50.365 Runtime features: free-space-tree 00:07:50.365 Checksum: crc32c 00:07:50.365 Number of devices: 1 00:07:50.365 Devices: 00:07:50.365 ID SIZE PATH 00:07:50.365 1 510.00MiB /dev/nvme0n1p1 00:07:50.365 00:07:50.365 14:44:32 -- common/autotest_common.sh@931 -- # return 0 00:07:50.365 14:44:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.626 14:44:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.626 14:44:33 -- target/filesystem.sh@25 -- # sync 00:07:50.626 14:44:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.626 14:44:33 -- target/filesystem.sh@27 -- # sync 00:07:50.626 14:44:33 -- target/filesystem.sh@29 -- # i=0 00:07:50.626 14:44:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.626 14:44:33 -- target/filesystem.sh@37 -- # kill -0 894919 00:07:50.626 14:44:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.626 14:44:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.626 14:44:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.626 14:44:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.626 00:07:50.626 real 0m0.576s 00:07:50.626 user 0m0.027s 00:07:50.626 sys 0m0.131s 00:07:50.626 14:44:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:50.626 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:07:50.626 ************************************ 00:07:50.626 END TEST filesystem_btrfs 00:07:50.626 ************************************ 00:07:50.626 14:44:33 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:50.626 14:44:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:50.626 14:44:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.626 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:07:50.887 ************************************ 00:07:50.887 START TEST filesystem_xfs 00:07:50.887 ************************************ 00:07:50.887 14:44:33 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:50.887 14:44:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:50.887 14:44:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.887 14:44:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:50.887 14:44:33 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:50.887 14:44:33 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:50.887 14:44:33 -- common/autotest_common.sh@914 -- # local i=0 00:07:50.887 14:44:33 -- common/autotest_common.sh@915 -- # local force 00:07:50.887 14:44:33 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:50.887 14:44:33 -- common/autotest_common.sh@920 -- # force=-f 00:07:50.887 14:44:33 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:50.887 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:50.887 = sectsz=512 attr=2, projid32bit=1 00:07:50.887 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:50.887 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:50.887 data = bsize=4096 blocks=130560, imaxpct=25 00:07:50.887 = sunit=0 swidth=0 blks 00:07:50.887 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:50.887 log =internal log bsize=4096 blocks=16384, version=2 00:07:50.887 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:50.887 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:52.273 Discarding blocks...Done. 00:07:52.273 14:44:34 -- common/autotest_common.sh@931 -- # return 0 00:07:52.273 14:44:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.187 14:44:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.187 14:44:36 -- target/filesystem.sh@25 -- # sync 00:07:54.188 14:44:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.188 14:44:36 -- target/filesystem.sh@27 -- # sync 00:07:54.188 14:44:36 -- target/filesystem.sh@29 -- # i=0 00:07:54.188 14:44:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.188 14:44:36 -- target/filesystem.sh@37 -- # kill -0 894919 00:07:54.188 14:44:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.188 14:44:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.188 14:44:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.188 14:44:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.188 00:07:54.188 real 0m3.060s 00:07:54.188 user 0m0.027s 00:07:54.188 sys 0m0.075s 00:07:54.188 14:44:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.188 14:44:36 -- common/autotest_common.sh@10 -- # set +x 00:07:54.188 ************************************ 00:07:54.188 END TEST filesystem_xfs 00:07:54.188 ************************************ 00:07:54.188 14:44:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:54.188 14:44:36 -- target/filesystem.sh@93 -- # sync 00:07:54.188 14:44:36 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:54.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.188 14:44:36 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:54.188 14:44:36 -- common/autotest_common.sh@1205 -- # local i=0 00:07:54.188 14:44:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:54.188 14:44:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.188 14:44:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:54.188 14:44:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.188 14:44:36 -- common/autotest_common.sh@1217 -- # return 0 00:07:54.188 14:44:36 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.188 14:44:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.188 14:44:36 -- common/autotest_common.sh@10 -- # set +x 00:07:54.188 14:44:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.188 14:44:36 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:54.188 14:44:36 -- target/filesystem.sh@101 -- # killprocess 894919 00:07:54.188 14:44:36 -- common/autotest_common.sh@936 -- # '[' -z 894919 ']' 00:07:54.188 14:44:36 -- common/autotest_common.sh@940 -- # kill -0 894919 00:07:54.188 14:44:36 -- 
common/autotest_common.sh@941 -- # uname 00:07:54.188 14:44:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:54.188 14:44:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 894919 00:07:54.188 14:44:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:54.188 14:44:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:54.188 14:44:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 894919' 00:07:54.188 killing process with pid 894919 00:07:54.188 14:44:36 -- common/autotest_common.sh@955 -- # kill 894919 00:07:54.188 14:44:36 -- common/autotest_common.sh@960 -- # wait 894919 00:07:54.449 14:44:36 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:54.449 00:07:54.449 real 0m12.037s 00:07:54.449 user 0m47.532s 00:07:54.449 sys 0m1.321s 00:07:54.449 14:44:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.449 14:44:36 -- common/autotest_common.sh@10 -- # set +x 00:07:54.449 ************************************ 00:07:54.449 END TEST nvmf_filesystem_no_in_capsule 00:07:54.449 ************************************ 00:07:54.449 14:44:37 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:54.449 14:44:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:54.449 14:44:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.449 14:44:37 -- common/autotest_common.sh@10 -- # set +x 00:07:54.709 ************************************ 00:07:54.709 START TEST nvmf_filesystem_in_capsule 00:07:54.709 ************************************ 00:07:54.709 14:44:37 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:54.709 14:44:37 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:54.709 14:44:37 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:54.709 14:44:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:54.709 14:44:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:54.709 14:44:37 -- common/autotest_common.sh@10 -- # set +x 00:07:54.709 14:44:37 -- nvmf/common.sh@470 -- # nvmfpid=897531 00:07:54.709 14:44:37 -- nvmf/common.sh@471 -- # waitforlisten 897531 00:07:54.709 14:44:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.709 14:44:37 -- common/autotest_common.sh@817 -- # '[' -z 897531 ']' 00:07:54.709 14:44:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.709 14:44:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:54.709 14:44:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.709 14:44:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:54.709 14:44:37 -- common/autotest_common.sh@10 -- # set +x 00:07:54.709 [2024-04-26 14:44:37.252059] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:54.709 [2024-04-26 14:44:37.252104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.709 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.709 [2024-04-26 14:44:37.318230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.970 [2024-04-26 14:44:37.381855] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.970 [2024-04-26 14:44:37.381895] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.970 [2024-04-26 14:44:37.381904] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.970 [2024-04-26 14:44:37.381916] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.970 [2024-04-26 14:44:37.381923] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.970 [2024-04-26 14:44:37.382118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.970 [2024-04-26 14:44:37.382231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.970 [2024-04-26 14:44:37.382386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.970 [2024-04-26 14:44:37.382386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.542 14:44:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:55.542 14:44:38 -- common/autotest_common.sh@850 -- # return 0 00:07:55.542 14:44:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:55.542 14:44:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:55.542 14:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:55.542 14:44:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.542 14:44:38 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:55.542 14:44:38 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:55.542 14:44:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.542 14:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:55.542 [2024-04-26 14:44:38.071501] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.542 14:44:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.542 14:44:38 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:55.542 14:44:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.542 14:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:55.542 Malloc1 00:07:55.542 14:44:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.542 14:44:38 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:55.542 14:44:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.542 14:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:55.542 14:44:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.542 14:44:38 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:55.542 14:44:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.542 14:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:55.542 14:44:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.542 14:44:38 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.542 14:44:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.542 14:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:55.542 [2024-04-26 14:44:38.199214] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.542 14:44:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.542 14:44:38 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:55.542 14:44:38 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:55.542 14:44:38 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:55.542 14:44:38 -- common/autotest_common.sh@1366 -- # local bs 00:07:55.542 14:44:38 -- common/autotest_common.sh@1367 -- # local nb 00:07:55.803 14:44:38 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:55.803 14:44:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:55.803 14:44:38 -- common/autotest_common.sh@10 -- # set +x 00:07:55.803 14:44:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.803 14:44:38 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:55.803 { 00:07:55.803 "name": "Malloc1", 00:07:55.803 "aliases": [ 00:07:55.803 "c9e54a33-3762-4ae2-99a6-a5fe2d5e75e0" 00:07:55.803 ], 00:07:55.803 "product_name": "Malloc disk", 00:07:55.803 "block_size": 512, 00:07:55.803 "num_blocks": 1048576, 00:07:55.803 "uuid": "c9e54a33-3762-4ae2-99a6-a5fe2d5e75e0", 00:07:55.803 "assigned_rate_limits": { 00:07:55.803 "rw_ios_per_sec": 0, 00:07:55.803 "rw_mbytes_per_sec": 0, 00:07:55.803 "r_mbytes_per_sec": 0, 00:07:55.803 "w_mbytes_per_sec": 0 00:07:55.803 }, 00:07:55.803 "claimed": true, 00:07:55.803 "claim_type": "exclusive_write", 00:07:55.803 "zoned": false, 00:07:55.803 "supported_io_types": { 00:07:55.803 "read": true, 00:07:55.803 "write": true, 00:07:55.803 "unmap": true, 00:07:55.803 "write_zeroes": true, 00:07:55.803 "flush": true, 00:07:55.803 "reset": true, 00:07:55.803 "compare": false, 00:07:55.803 "compare_and_write": false, 00:07:55.803 "abort": true, 00:07:55.803 "nvme_admin": false, 00:07:55.803 "nvme_io": false 00:07:55.803 }, 00:07:55.803 "memory_domains": [ 00:07:55.803 { 00:07:55.803 "dma_device_id": "system", 00:07:55.803 "dma_device_type": 1 00:07:55.803 }, 00:07:55.803 { 00:07:55.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.803 "dma_device_type": 2 00:07:55.803 } 00:07:55.803 ], 00:07:55.803 "driver_specific": {} 00:07:55.803 } 00:07:55.803 ]' 00:07:55.803 14:44:38 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:55.803 14:44:38 -- common/autotest_common.sh@1369 -- # bs=512 00:07:55.803 14:44:38 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:55.803 14:44:38 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:55.803 14:44:38 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:55.803 14:44:38 -- common/autotest_common.sh@1374 -- # echo 512 00:07:55.803 14:44:38 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:55.803 14:44:38 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.187 14:44:39 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.187 14:44:39 -- common/autotest_common.sh@1184 -- # local i=0 00:07:57.187 14:44:39 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.187 14:44:39 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:57.187 14:44:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:59.735 14:44:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:59.735 14:44:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:59.735 14:44:41 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.735 14:44:41 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:59.735 14:44:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.735 14:44:41 -- common/autotest_common.sh@1194 -- # return 0 00:07:59.735 14:44:41 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:59.735 14:44:41 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:59.735 14:44:41 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:59.735 14:44:41 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:59.735 14:44:41 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:59.735 14:44:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:59.735 14:44:41 -- setup/common.sh@80 -- # echo 536870912 00:07:59.735 14:44:41 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:59.735 14:44:41 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:59.735 14:44:41 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:59.735 14:44:41 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:59.735 14:44:42 -- target/filesystem.sh@69 -- # partprobe 00:07:59.735 14:44:42 -- target/filesystem.sh@70 -- # sleep 1 00:08:00.675 14:44:43 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:00.675 14:44:43 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:00.675 14:44:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:00.675 14:44:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.675 14:44:43 -- common/autotest_common.sh@10 -- # set +x 00:08:00.675 ************************************ 00:08:00.675 START TEST filesystem_in_capsule_ext4 00:08:00.675 ************************************ 00:08:00.675 14:44:43 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:00.675 14:44:43 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:00.675 14:44:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.675 14:44:43 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:00.675 14:44:43 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:00.675 14:44:43 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:00.675 14:44:43 -- common/autotest_common.sh@914 -- # local i=0 00:08:00.675 14:44:43 -- common/autotest_common.sh@915 -- # local force 00:08:00.675 14:44:43 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:00.675 14:44:43 -- common/autotest_common.sh@918 -- # force=-F 00:08:00.675 14:44:43 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:00.676 mke2fs 1.46.5 (30-Dec-2021) 00:08:00.936 Discarding device blocks: 0/522240 done 00:08:00.936 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:00.936 Filesystem UUID: 058507b4-719e-45f8-b5a2-0ac455627604 00:08:00.936 Superblock backups stored on blocks: 00:08:00.936 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:00.936 00:08:00.936 
Allocating group tables: 0/64 done 00:08:00.936 Writing inode tables: 0/64 done 00:08:00.936 Creating journal (8192 blocks): done 00:08:01.768 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:08:01.768 00:08:02.030 14:44:44 -- common/autotest_common.sh@931 -- # return 0 00:08:02.030 14:44:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.030 14:44:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.030 14:44:44 -- target/filesystem.sh@25 -- # sync 00:08:02.292 14:44:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.292 14:44:44 -- target/filesystem.sh@27 -- # sync 00:08:02.292 14:44:44 -- target/filesystem.sh@29 -- # i=0 00:08:02.292 14:44:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.292 14:44:44 -- target/filesystem.sh@37 -- # kill -0 897531 00:08:02.292 14:44:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.292 14:44:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.292 14:44:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.292 14:44:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.292 00:08:02.292 real 0m1.444s 00:08:02.292 user 0m0.030s 00:08:02.292 sys 0m0.068s 00:08:02.292 14:44:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.292 14:44:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.292 ************************************ 00:08:02.292 END TEST filesystem_in_capsule_ext4 00:08:02.292 ************************************ 00:08:02.292 14:44:44 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:02.292 14:44:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:02.292 14:44:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.292 14:44:44 -- common/autotest_common.sh@10 -- # set +x 00:08:02.292 ************************************ 00:08:02.292 START TEST filesystem_in_capsule_btrfs 00:08:02.292 ************************************ 00:08:02.292 14:44:44 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:02.292 14:44:44 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:02.292 14:44:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:02.292 14:44:44 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:02.292 14:44:44 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:02.292 14:44:44 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:02.292 14:44:44 -- common/autotest_common.sh@914 -- # local i=0 00:08:02.292 14:44:44 -- common/autotest_common.sh@915 -- # local force 00:08:02.292 14:44:44 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:02.292 14:44:44 -- common/autotest_common.sh@920 -- # force=-f 00:08:02.292 14:44:44 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:02.553 btrfs-progs v6.6.2 00:08:02.553 See https://btrfs.readthedocs.io for more information. 00:08:02.553 00:08:02.553 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:02.553 NOTE: several default settings have changed in version 5.15, please make sure 00:08:02.553 this does not affect your deployments: 00:08:02.553 - DUP for metadata (-m dup) 00:08:02.553 - enabled no-holes (-O no-holes) 00:08:02.553 - enabled free-space-tree (-R free-space-tree) 00:08:02.553 00:08:02.553 Label: (null) 00:08:02.553 UUID: 705fa3a6-9bba-4440-89cd-eb2b6a412f77 00:08:02.553 Node size: 16384 00:08:02.553 Sector size: 4096 00:08:02.553 Filesystem size: 510.00MiB 00:08:02.553 Block group profiles: 00:08:02.553 Data: single 8.00MiB 00:08:02.553 Metadata: DUP 32.00MiB 00:08:02.553 System: DUP 8.00MiB 00:08:02.553 SSD detected: yes 00:08:02.553 Zoned device: no 00:08:02.553 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:02.553 Runtime features: free-space-tree 00:08:02.553 Checksum: crc32c 00:08:02.553 Number of devices: 1 00:08:02.553 Devices: 00:08:02.553 ID SIZE PATH 00:08:02.553 1 510.00MiB /dev/nvme0n1p1 00:08:02.553 00:08:02.553 14:44:45 -- common/autotest_common.sh@931 -- # return 0 00:08:02.553 14:44:45 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.814 14:44:45 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.814 14:44:45 -- target/filesystem.sh@25 -- # sync 00:08:02.814 14:44:45 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.814 14:44:45 -- target/filesystem.sh@27 -- # sync 00:08:02.814 14:44:45 -- target/filesystem.sh@29 -- # i=0 00:08:02.814 14:44:45 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.814 14:44:45 -- target/filesystem.sh@37 -- # kill -0 897531 00:08:02.814 14:44:45 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.814 14:44:45 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.814 14:44:45 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.814 14:44:45 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.814 00:08:02.814 real 0m0.514s 00:08:02.814 user 0m0.027s 00:08:02.814 sys 0m0.134s 00:08:02.814 14:44:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.814 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:08:02.814 ************************************ 00:08:02.814 END TEST filesystem_in_capsule_btrfs 00:08:02.814 ************************************ 00:08:03.075 14:44:45 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:03.075 14:44:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:03.075 14:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.075 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:08:03.075 ************************************ 00:08:03.075 START TEST filesystem_in_capsule_xfs 00:08:03.075 ************************************ 00:08:03.075 14:44:45 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:03.075 14:44:45 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:03.075 14:44:45 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.075 14:44:45 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:03.075 14:44:45 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:03.075 14:44:45 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:03.075 14:44:45 -- common/autotest_common.sh@914 -- # local i=0 00:08:03.075 14:44:45 -- common/autotest_common.sh@915 -- # local force 00:08:03.075 14:44:45 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:03.075 14:44:45 -- common/autotest_common.sh@920 -- # force=-f 
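
Each filesystem_* subtest in this log (ext4, btrfs, xfs, with and without in-capsule data) follows the same pattern from target/filesystem.sh and make_filesystem: format the exported partition, mount it, do a small write/sync/remove cycle, unmount, then confirm the nvmf_tgt process is still alive. A rough sketch of that loop, with the device path and pid variable taken from this run and error handling omitted:

for fstype in ext4 btrfs xfs; do
    force=-f; [ "$fstype" = ext4 ] && force=-F    # make_filesystem: ext4 wants -F, btrfs/xfs want -f
    mkfs.$fstype $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                            # target must survive the I/O
done
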
00:08:03.075 14:44:45 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:03.075 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:03.075 = sectsz=512 attr=2, projid32bit=1 00:08:03.075 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:03.075 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:03.075 data = bsize=4096 blocks=130560, imaxpct=25 00:08:03.075 = sunit=0 swidth=0 blks 00:08:03.075 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:03.075 log =internal log bsize=4096 blocks=16384, version=2 00:08:03.075 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:03.075 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:04.073 Discarding blocks...Done. 00:08:04.073 14:44:46 -- common/autotest_common.sh@931 -- # return 0 00:08:04.073 14:44:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.053 14:44:48 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.053 14:44:48 -- target/filesystem.sh@25 -- # sync 00:08:06.053 14:44:48 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.053 14:44:48 -- target/filesystem.sh@27 -- # sync 00:08:06.053 14:44:48 -- target/filesystem.sh@29 -- # i=0 00:08:06.053 14:44:48 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.053 14:44:48 -- target/filesystem.sh@37 -- # kill -0 897531 00:08:06.053 14:44:48 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.053 14:44:48 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.053 14:44:48 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.053 14:44:48 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.053 00:08:06.053 real 0m3.040s 00:08:06.053 user 0m0.029s 00:08:06.053 sys 0m0.072s 00:08:06.053 14:44:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:06.053 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:08:06.053 ************************************ 00:08:06.053 END TEST filesystem_in_capsule_xfs 00:08:06.053 ************************************ 00:08:06.312 14:44:48 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:06.573 14:44:49 -- target/filesystem.sh@93 -- # sync 00:08:06.573 14:44:49 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:06.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.573 14:44:49 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:06.573 14:44:49 -- common/autotest_common.sh@1205 -- # local i=0 00:08:06.573 14:44:49 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:06.573 14:44:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.573 14:44:49 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:06.573 14:44:49 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.573 14:44:49 -- common/autotest_common.sh@1217 -- # return 0 00:08:06.573 14:44:49 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.573 14:44:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:06.573 14:44:49 -- common/autotest_common.sh@10 -- # set +x 00:08:06.573 14:44:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:06.573 14:44:49 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:06.573 14:44:49 -- target/filesystem.sh@101 -- # killprocess 897531 00:08:06.573 14:44:49 -- common/autotest_common.sh@936 -- # '[' -z 897531 ']' 00:08:06.573 14:44:49 -- common/autotest_common.sh@940 -- # kill -0 897531 
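
Teardown mirrors setup: the host disconnects, the subsystem is deleted over RPC, and killprocess terminates and reaps the target. Stripped of the test plumbing, and using the same NQN and pid variable seen in the trace, the sequence is roughly:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# wait until no block device with the SPDKISFASTANDAWESOME serial remains (waitforserial_disconnect)
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"    # killprocess: stop nvmf_tgt and reap it
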
00:08:06.573 14:44:49 -- common/autotest_common.sh@941 -- # uname 00:08:06.573 14:44:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:06.573 14:44:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 897531 00:08:06.833 14:44:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:06.833 14:44:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:06.833 14:44:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 897531' 00:08:06.833 killing process with pid 897531 00:08:06.833 14:44:49 -- common/autotest_common.sh@955 -- # kill 897531 00:08:06.833 14:44:49 -- common/autotest_common.sh@960 -- # wait 897531 00:08:06.833 14:44:49 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:06.833 00:08:06.833 real 0m12.288s 00:08:06.833 user 0m48.515s 00:08:06.833 sys 0m1.382s 00:08:06.833 14:44:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:06.833 14:44:49 -- common/autotest_common.sh@10 -- # set +x 00:08:06.833 ************************************ 00:08:06.833 END TEST nvmf_filesystem_in_capsule 00:08:06.833 ************************************ 00:08:07.095 14:44:49 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:07.095 14:44:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:07.095 14:44:49 -- nvmf/common.sh@117 -- # sync 00:08:07.095 14:44:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.095 14:44:49 -- nvmf/common.sh@120 -- # set +e 00:08:07.095 14:44:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.095 14:44:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.095 rmmod nvme_tcp 00:08:07.095 rmmod nvme_fabrics 00:08:07.095 rmmod nvme_keyring 00:08:07.095 14:44:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.095 14:44:49 -- nvmf/common.sh@124 -- # set -e 00:08:07.095 14:44:49 -- nvmf/common.sh@125 -- # return 0 00:08:07.095 14:44:49 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:08:07.095 14:44:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:07.095 14:44:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:07.095 14:44:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:07.095 14:44:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.095 14:44:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.095 14:44:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.095 14:44:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.095 14:44:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.008 14:44:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.008 00:08:09.008 real 0m34.544s 00:08:09.008 user 1m38.386s 00:08:09.008 sys 0m8.457s 00:08:09.008 14:44:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.008 14:44:51 -- common/autotest_common.sh@10 -- # set +x 00:08:09.008 ************************************ 00:08:09.008 END TEST nvmf_filesystem 00:08:09.008 ************************************ 00:08:09.270 14:44:51 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:09.270 14:44:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:09.270 14:44:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.270 14:44:51 -- common/autotest_common.sh@10 -- # set +x 00:08:09.270 ************************************ 00:08:09.270 START TEST nvmf_discovery 00:08:09.270 ************************************ 00:08:09.270 14:44:51 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:09.531 * Looking for test storage... 00:08:09.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.531 14:44:51 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.531 14:44:51 -- nvmf/common.sh@7 -- # uname -s 00:08:09.531 14:44:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.531 14:44:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.531 14:44:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.531 14:44:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.531 14:44:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.531 14:44:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.531 14:44:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.531 14:44:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.531 14:44:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.531 14:44:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.532 14:44:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:09.532 14:44:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:09.532 14:44:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.532 14:44:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.532 14:44:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.532 14:44:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.532 14:44:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.532 14:44:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.532 14:44:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.532 14:44:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.532 14:44:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.532 14:44:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.532 14:44:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.532 14:44:51 -- paths/export.sh@5 -- # export PATH 00:08:09.532 14:44:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.532 14:44:51 -- nvmf/common.sh@47 -- # : 0 00:08:09.532 14:44:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.532 14:44:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.532 14:44:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.532 14:44:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.532 14:44:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.532 14:44:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.532 14:44:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.532 14:44:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.532 14:44:51 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:09.532 14:44:51 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:09.532 14:44:51 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:09.532 14:44:51 -- target/discovery.sh@15 -- # hash nvme 00:08:09.532 14:44:51 -- target/discovery.sh@20 -- # nvmftestinit 00:08:09.532 14:44:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:09.532 14:44:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.532 14:44:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:09.532 14:44:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:09.532 14:44:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:09.532 14:44:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.532 14:44:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.532 14:44:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.532 14:44:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:09.532 14:44:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:09.532 14:44:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.532 14:44:51 -- common/autotest_common.sh@10 -- # set +x 00:08:17.675 14:44:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:17.675 14:44:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.675 14:44:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.675 14:44:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.675 14:44:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.675 14:44:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.675 14:44:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.675 14:44:58 -- 
nvmf/common.sh@295 -- # net_devs=() 00:08:17.675 14:44:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.675 14:44:58 -- nvmf/common.sh@296 -- # e810=() 00:08:17.675 14:44:58 -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.675 14:44:58 -- nvmf/common.sh@297 -- # x722=() 00:08:17.675 14:44:58 -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.675 14:44:58 -- nvmf/common.sh@298 -- # mlx=() 00:08:17.675 14:44:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.675 14:44:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.675 14:44:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.675 14:44:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.675 14:44:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.676 14:44:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.676 14:44:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.676 14:44:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.676 14:44:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.676 14:44:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.676 14:44:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.676 14:44:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.676 14:44:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.676 14:44:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.676 14:44:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.676 14:44:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.676 14:44:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:17.676 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:17.676 14:44:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.676 14:44:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:17.676 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:17.676 14:44:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.676 14:44:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.676 14:44:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.676 14:44:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:17.676 14:44:58 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.676 14:44:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:17.676 Found net devices under 0000:31:00.0: cvl_0_0 00:08:17.676 14:44:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.676 14:44:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.676 14:44:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.676 14:44:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:17.676 14:44:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.676 14:44:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:17.676 Found net devices under 0000:31:00.1: cvl_0_1 00:08:17.676 14:44:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.676 14:44:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:17.676 14:44:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:17.676 14:44:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:17.676 14:44:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:17.676 14:44:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.676 14:44:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.676 14:44:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.676 14:44:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.676 14:44:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.676 14:44:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.676 14:44:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.676 14:44:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.676 14:44:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.676 14:44:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.676 14:44:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.676 14:44:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.676 14:44:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.676 14:44:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.676 14:44:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.676 14:44:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.676 14:44:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.676 14:44:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.676 14:44:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.676 14:44:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:08:17.676 00:08:17.676 --- 10.0.0.2 ping statistics --- 00:08:17.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.676 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:08:17.676 14:44:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:08:17.676 00:08:17.676 --- 10.0.0.1 ping statistics --- 00:08:17.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.676 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:08:17.676 14:44:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.676 14:44:59 -- nvmf/common.sh@411 -- # return 0 00:08:17.676 14:44:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:17.676 14:44:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.676 14:44:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:17.676 14:44:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:17.676 14:44:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.676 14:44:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:17.676 14:44:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:17.676 14:44:59 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:17.676 14:44:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:17.676 14:44:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:17.676 14:44:59 -- common/autotest_common.sh@10 -- # set +x 00:08:17.676 14:44:59 -- nvmf/common.sh@470 -- # nvmfpid=904340 00:08:17.676 14:44:59 -- nvmf/common.sh@471 -- # waitforlisten 904340 00:08:17.676 14:44:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.676 14:44:59 -- common/autotest_common.sh@817 -- # '[' -z 904340 ']' 00:08:17.676 14:44:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.676 14:44:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:17.676 14:44:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.676 14:44:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:17.676 14:44:59 -- common/autotest_common.sh@10 -- # set +x 00:08:17.676 [2024-04-26 14:44:59.262627] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:08:17.676 [2024-04-26 14:44:59.262691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.676 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.676 [2024-04-26 14:44:59.336288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.676 [2024-04-26 14:44:59.411859] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.676 [2024-04-26 14:44:59.411898] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.676 [2024-04-26 14:44:59.411906] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.676 [2024-04-26 14:44:59.411914] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.676 [2024-04-26 14:44:59.411921] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
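
(Added note, not captured output.) The nvmftestinit block above amounts to a small amount of manual setup before the target is started. A minimal sketch of the equivalent commands, assuming the two E810 ports are already bound to the ice driver and renamed cvl_0_0/cvl_0_1 as the test scripts do, and that nvmf_tgt is run from the SPDK build tree:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
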
00:08:17.676 [2024-04-26 14:44:59.411989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.676 [2024-04-26 14:44:59.412092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.676 [2024-04-26 14:44:59.412246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.676 [2024-04-26 14:44:59.412248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.676 14:45:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:17.676 14:45:00 -- common/autotest_common.sh@850 -- # return 0 00:08:17.676 14:45:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:17.676 14:45:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:17.676 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.676 14:45:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.676 14:45:00 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.676 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.676 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.676 [2024-04-26 14:45:00.084442] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.676 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.676 14:45:00 -- target/discovery.sh@26 -- # seq 1 4 00:08:17.676 14:45:00 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.676 14:45:00 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:17.676 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.676 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.676 Null1 00:08:17.676 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.676 14:45:00 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.676 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.676 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.676 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.676 14:45:00 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:17.676 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.676 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.676 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.676 14:45:00 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 [2024-04-26 14:45:00.141963] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.677 14:45:00 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 Null2 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:17.677 14:45:00 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.677 14:45:00 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 Null3 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.677 14:45:00 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 Null4 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:17.677 
14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:17.677 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.677 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.677 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.677 14:45:00 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:17.937 00:08:17.937 Discovery Log Number of Records 6, Generation counter 6 00:08:17.937 =====Discovery Log Entry 0====== 00:08:17.937 trtype: tcp 00:08:17.937 adrfam: ipv4 00:08:17.937 subtype: current discovery subsystem 00:08:17.937 treq: not required 00:08:17.937 portid: 0 00:08:17.937 trsvcid: 4420 00:08:17.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:17.937 traddr: 10.0.0.2 00:08:17.937 eflags: explicit discovery connections, duplicate discovery information 00:08:17.937 sectype: none 00:08:17.937 =====Discovery Log Entry 1====== 00:08:17.937 trtype: tcp 00:08:17.937 adrfam: ipv4 00:08:17.937 subtype: nvme subsystem 00:08:17.937 treq: not required 00:08:17.937 portid: 0 00:08:17.937 trsvcid: 4420 00:08:17.937 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:17.937 traddr: 10.0.0.2 00:08:17.937 eflags: none 00:08:17.937 sectype: none 00:08:17.937 =====Discovery Log Entry 2====== 00:08:17.937 trtype: tcp 00:08:17.937 adrfam: ipv4 00:08:17.937 subtype: nvme subsystem 00:08:17.937 treq: not required 00:08:17.937 portid: 0 00:08:17.937 trsvcid: 4420 00:08:17.937 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:17.937 traddr: 10.0.0.2 00:08:17.937 eflags: none 00:08:17.937 sectype: none 00:08:17.937 =====Discovery Log Entry 3====== 00:08:17.937 trtype: tcp 00:08:17.937 adrfam: ipv4 00:08:17.937 subtype: nvme subsystem 00:08:17.937 treq: not required 00:08:17.937 portid: 0 00:08:17.937 trsvcid: 4420 00:08:17.937 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:17.937 traddr: 10.0.0.2 00:08:17.937 eflags: none 00:08:17.937 sectype: none 00:08:17.937 =====Discovery Log Entry 4====== 00:08:17.937 trtype: tcp 00:08:17.937 adrfam: ipv4 00:08:17.937 subtype: nvme subsystem 00:08:17.937 treq: not required 00:08:17.937 portid: 0 00:08:17.937 trsvcid: 4420 00:08:17.937 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:17.937 traddr: 10.0.0.2 00:08:17.937 eflags: none 00:08:17.937 sectype: none 00:08:17.937 =====Discovery Log Entry 5====== 00:08:17.937 trtype: tcp 00:08:17.937 adrfam: ipv4 00:08:17.937 subtype: discovery subsystem referral 00:08:17.937 treq: not required 00:08:17.937 portid: 0 00:08:17.937 trsvcid: 4430 00:08:17.937 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:17.937 traddr: 10.0.0.2 00:08:17.937 eflags: none 00:08:17.937 sectype: none 00:08:17.937 14:45:00 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:17.937 Perform nvmf subsystem discovery via RPC 00:08:17.937 14:45:00 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:17.937 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.937 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.937 [2024-04-26 14:45:00.442785] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:17.937 [ 00:08:17.937 { 00:08:17.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:17.937 "subtype": "Discovery", 00:08:17.937 "listen_addresses": [ 00:08:17.937 { 00:08:17.937 "transport": "TCP", 00:08:17.937 "trtype": "TCP", 00:08:17.937 "adrfam": "IPv4", 00:08:17.937 "traddr": "10.0.0.2", 00:08:17.937 "trsvcid": "4420" 00:08:17.937 } 00:08:17.937 ], 00:08:17.937 "allow_any_host": true, 00:08:17.937 "hosts": [] 00:08:17.937 }, 00:08:17.937 { 00:08:17.937 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.937 "subtype": "NVMe", 00:08:17.937 "listen_addresses": [ 00:08:17.937 { 00:08:17.937 "transport": "TCP", 00:08:17.937 "trtype": "TCP", 00:08:17.937 "adrfam": "IPv4", 00:08:17.937 "traddr": "10.0.0.2", 00:08:17.937 "trsvcid": "4420" 00:08:17.937 } 00:08:17.937 ], 00:08:17.937 "allow_any_host": true, 00:08:17.937 "hosts": [], 00:08:17.937 "serial_number": "SPDK00000000000001", 00:08:17.937 "model_number": "SPDK bdev Controller", 00:08:17.937 "max_namespaces": 32, 00:08:17.937 "min_cntlid": 1, 00:08:17.938 "max_cntlid": 65519, 00:08:17.938 "namespaces": [ 00:08:17.938 { 00:08:17.938 "nsid": 1, 00:08:17.938 "bdev_name": "Null1", 00:08:17.938 "name": "Null1", 00:08:17.938 "nguid": "BEFB013F17F1461E80D99B65E7335975", 00:08:17.938 "uuid": "befb013f-17f1-461e-80d9-9b65e7335975" 00:08:17.938 } 00:08:17.938 ] 00:08:17.938 }, 00:08:17.938 { 00:08:17.938 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:17.938 "subtype": "NVMe", 00:08:17.938 "listen_addresses": [ 00:08:17.938 { 00:08:17.938 "transport": "TCP", 00:08:17.938 "trtype": "TCP", 00:08:17.938 "adrfam": "IPv4", 00:08:17.938 "traddr": "10.0.0.2", 00:08:17.938 "trsvcid": "4420" 00:08:17.938 } 00:08:17.938 ], 00:08:17.938 "allow_any_host": true, 00:08:17.938 "hosts": [], 00:08:17.938 "serial_number": "SPDK00000000000002", 00:08:17.938 "model_number": "SPDK bdev Controller", 00:08:17.938 "max_namespaces": 32, 00:08:17.938 "min_cntlid": 1, 00:08:17.938 "max_cntlid": 65519, 00:08:17.938 "namespaces": [ 00:08:17.938 { 00:08:17.938 "nsid": 1, 00:08:17.938 "bdev_name": "Null2", 00:08:17.938 "name": "Null2", 00:08:17.938 "nguid": "3D8FB8D67723413B990DB2C471BBB90B", 00:08:17.938 "uuid": "3d8fb8d6-7723-413b-990d-b2c471bbb90b" 00:08:17.938 } 00:08:17.938 ] 00:08:17.938 }, 00:08:17.938 { 00:08:17.938 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:17.938 "subtype": "NVMe", 00:08:17.938 "listen_addresses": [ 00:08:17.938 { 00:08:17.938 "transport": "TCP", 00:08:17.938 "trtype": "TCP", 00:08:17.938 "adrfam": "IPv4", 00:08:17.938 "traddr": "10.0.0.2", 00:08:17.938 "trsvcid": "4420" 00:08:17.938 } 00:08:17.938 ], 00:08:17.938 "allow_any_host": true, 00:08:17.938 "hosts": [], 00:08:17.938 "serial_number": "SPDK00000000000003", 00:08:17.938 "model_number": "SPDK bdev Controller", 00:08:17.938 "max_namespaces": 32, 00:08:17.938 "min_cntlid": 1, 00:08:17.938 "max_cntlid": 65519, 00:08:17.938 "namespaces": [ 00:08:17.938 { 00:08:17.938 "nsid": 1, 00:08:17.938 "bdev_name": "Null3", 00:08:17.938 "name": "Null3", 00:08:17.938 "nguid": "8839F7228FD54A0EAA4107CE43267399", 00:08:17.938 "uuid": "8839f722-8fd5-4a0e-aa41-07ce43267399" 00:08:17.938 } 00:08:17.938 ] 
00:08:17.938 }, 00:08:17.938 { 00:08:17.938 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:17.938 "subtype": "NVMe", 00:08:17.938 "listen_addresses": [ 00:08:17.938 { 00:08:17.938 "transport": "TCP", 00:08:17.938 "trtype": "TCP", 00:08:17.938 "adrfam": "IPv4", 00:08:17.938 "traddr": "10.0.0.2", 00:08:17.938 "trsvcid": "4420" 00:08:17.938 } 00:08:17.938 ], 00:08:17.938 "allow_any_host": true, 00:08:17.938 "hosts": [], 00:08:17.938 "serial_number": "SPDK00000000000004", 00:08:17.938 "model_number": "SPDK bdev Controller", 00:08:17.938 "max_namespaces": 32, 00:08:17.938 "min_cntlid": 1, 00:08:17.938 "max_cntlid": 65519, 00:08:17.938 "namespaces": [ 00:08:17.938 { 00:08:17.938 "nsid": 1, 00:08:17.938 "bdev_name": "Null4", 00:08:17.938 "name": "Null4", 00:08:17.938 "nguid": "D28BE42F9DA549B5BA92DC05F998B1C5", 00:08:17.938 "uuid": "d28be42f-9da5-49b5-ba92-dc05f998b1c5" 00:08:17.938 } 00:08:17.938 ] 00:08:17.938 } 00:08:17.938 ] 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.938 14:45:00 -- target/discovery.sh@42 -- # seq 1 4 00:08:17.938 14:45:00 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.938 14:45:00 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.938 14:45:00 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.938 14:45:00 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.938 14:45:00 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.938 14:45:00 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.938 14:45:00 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.938 14:45:00 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.938 14:45:00 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.938 14:45:00 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.938 14:45:00 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
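
(Added note, not captured output.) Condensed, the discovery.sh run above drives the target through the following RPC sequence, repeated for cnode1 through cnode4; the test issues these via its rpc_cmd wrapper, and scripts/rpc.py against the same UNIX socket is the stand-alone equivalent, with hostnqn/hostid placeholders standing in for the values generated by nvme gen-hostnqn:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_null_create Null1 102400 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=<hostnqn> --hostid=<hostid>   # 6 records: discovery, 4 subsystems, 1 referral
    scripts/rpc.py nvmf_get_subsystems                                               # same view over RPC, as dumped above
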
00:08:17.938 14:45:00 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.938 14:45:00 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.938 14:45:00 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:17.938 14:45:00 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:17.938 14:45:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.938 14:45:00 -- common/autotest_common.sh@10 -- # set +x 00:08:17.938 14:45:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:18.198 14:45:00 -- target/discovery.sh@49 -- # check_bdevs= 00:08:18.198 14:45:00 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:18.198 14:45:00 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:18.198 14:45:00 -- target/discovery.sh@57 -- # nvmftestfini 00:08:18.198 14:45:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:18.198 14:45:00 -- nvmf/common.sh@117 -- # sync 00:08:18.198 14:45:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.198 14:45:00 -- nvmf/common.sh@120 -- # set +e 00:08:18.198 14:45:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.198 14:45:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.198 rmmod nvme_tcp 00:08:18.198 rmmod nvme_fabrics 00:08:18.198 rmmod nvme_keyring 00:08:18.198 14:45:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.198 14:45:00 -- nvmf/common.sh@124 -- # set -e 00:08:18.198 14:45:00 -- nvmf/common.sh@125 -- # return 0 00:08:18.198 14:45:00 -- nvmf/common.sh@478 -- # '[' -n 904340 ']' 00:08:18.198 14:45:00 -- nvmf/common.sh@479 -- # killprocess 904340 00:08:18.198 14:45:00 -- common/autotest_common.sh@936 -- # '[' -z 904340 ']' 00:08:18.198 14:45:00 -- common/autotest_common.sh@940 -- # kill -0 904340 00:08:18.198 14:45:00 -- common/autotest_common.sh@941 -- # uname 00:08:18.198 14:45:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:18.198 14:45:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 904340 00:08:18.199 14:45:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:18.199 14:45:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:18.199 14:45:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 904340' 00:08:18.199 killing process with pid 904340 00:08:18.199 14:45:00 -- common/autotest_common.sh@955 -- # kill 904340 00:08:18.199 [2024-04-26 14:45:00.726435] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:18.199 14:45:00 -- common/autotest_common.sh@960 -- # wait 904340 00:08:18.199 14:45:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:18.199 14:45:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:18.199 14:45:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:18.199 14:45:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.199 14:45:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.199 14:45:00 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.199 14:45:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.199 14:45:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.749 14:45:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.749 00:08:20.749 real 0m11.095s 00:08:20.749 user 0m8.051s 00:08:20.749 sys 0m5.661s 00:08:20.749 14:45:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:20.749 14:45:02 -- common/autotest_common.sh@10 -- # set +x 00:08:20.749 ************************************ 00:08:20.749 END TEST nvmf_discovery 00:08:20.749 ************************************ 00:08:20.749 14:45:02 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:20.749 14:45:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:20.749 14:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.749 14:45:02 -- common/autotest_common.sh@10 -- # set +x 00:08:20.749 ************************************ 00:08:20.749 START TEST nvmf_referrals 00:08:20.749 ************************************ 00:08:20.749 14:45:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:20.749 * Looking for test storage... 00:08:20.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.749 14:45:03 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.749 14:45:03 -- nvmf/common.sh@7 -- # uname -s 00:08:20.749 14:45:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.749 14:45:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.749 14:45:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.749 14:45:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.749 14:45:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.749 14:45:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.749 14:45:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.749 14:45:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.749 14:45:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.749 14:45:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.749 14:45:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:20.749 14:45:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:20.749 14:45:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.749 14:45:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.749 14:45:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.749 14:45:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.749 14:45:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.749 14:45:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.749 14:45:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.749 14:45:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.750 14:45:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.750 14:45:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.750 14:45:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.750 14:45:03 -- paths/export.sh@5 -- # export PATH 00:08:20.750 14:45:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.750 14:45:03 -- nvmf/common.sh@47 -- # : 0 00:08:20.750 14:45:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.750 14:45:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.750 14:45:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.750 14:45:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.750 14:45:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.750 14:45:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.750 14:45:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.750 14:45:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.750 14:45:03 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:20.750 14:45:03 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:20.750 14:45:03 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:20.750 14:45:03 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:20.750 14:45:03 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:20.750 14:45:03 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:20.750 14:45:03 -- target/referrals.sh@37 -- # nvmftestinit 00:08:20.750 14:45:03 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:08:20.750 14:45:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.750 14:45:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:20.750 14:45:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:20.750 14:45:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:20.750 14:45:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.750 14:45:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.750 14:45:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.750 14:45:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:20.750 14:45:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:20.750 14:45:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.750 14:45:03 -- common/autotest_common.sh@10 -- # set +x 00:08:28.894 14:45:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:28.894 14:45:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:28.894 14:45:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:28.894 14:45:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:28.894 14:45:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:28.894 14:45:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:28.894 14:45:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:28.894 14:45:10 -- nvmf/common.sh@295 -- # net_devs=() 00:08:28.894 14:45:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:28.894 14:45:10 -- nvmf/common.sh@296 -- # e810=() 00:08:28.894 14:45:10 -- nvmf/common.sh@296 -- # local -ga e810 00:08:28.894 14:45:10 -- nvmf/common.sh@297 -- # x722=() 00:08:28.894 14:45:10 -- nvmf/common.sh@297 -- # local -ga x722 00:08:28.894 14:45:10 -- nvmf/common.sh@298 -- # mlx=() 00:08:28.894 14:45:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:28.894 14:45:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.894 14:45:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:28.894 14:45:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:28.894 14:45:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:28.894 14:45:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.894 14:45:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:28.894 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:28.894 14:45:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.894 14:45:10 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.894 14:45:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:28.894 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:28.894 14:45:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:28.894 14:45:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:28.894 14:45:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.894 14:45:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.894 14:45:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:28.894 14:45:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.894 14:45:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:28.894 Found net devices under 0000:31:00.0: cvl_0_0 00:08:28.894 14:45:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.894 14:45:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.894 14:45:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.894 14:45:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:28.894 14:45:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.894 14:45:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:28.894 Found net devices under 0000:31:00.1: cvl_0_1 00:08:28.894 14:45:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.895 14:45:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:28.895 14:45:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:28.895 14:45:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:28.895 14:45:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:28.895 14:45:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:28.895 14:45:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.895 14:45:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.895 14:45:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.895 14:45:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:28.895 14:45:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.895 14:45:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.895 14:45:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:28.895 14:45:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.895 14:45:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.895 14:45:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:28.895 14:45:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:28.895 14:45:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.895 14:45:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:28.895 14:45:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.895 14:45:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.895 14:45:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:28.895 14:45:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.895 14:45:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.895 14:45:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.895 14:45:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:28.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:08:28.895 00:08:28.895 --- 10.0.0.2 ping statistics --- 00:08:28.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.895 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:08:28.895 14:45:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:08:28.895 00:08:28.895 --- 10.0.0.1 ping statistics --- 00:08:28.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.895 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:28.895 14:45:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.895 14:45:10 -- nvmf/common.sh@411 -- # return 0 00:08:28.895 14:45:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:28.895 14:45:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.895 14:45:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:28.895 14:45:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:28.895 14:45:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.895 14:45:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:28.895 14:45:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:28.895 14:45:10 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:28.895 14:45:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:28.895 14:45:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:28.895 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.895 14:45:10 -- nvmf/common.sh@470 -- # nvmfpid=909535 00:08:28.895 14:45:10 -- nvmf/common.sh@471 -- # waitforlisten 909535 00:08:28.895 14:45:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.895 14:45:10 -- common/autotest_common.sh@817 -- # '[' -z 909535 ']' 00:08:28.895 14:45:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.895 14:45:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:28.895 14:45:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.895 14:45:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:28.895 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.895 [2024-04-26 14:45:10.716559] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
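
(Added note, not captured output.) The referral checks that follow reduce to comparing what the target reports over RPC against what a host sees on the wire. A rough sketch of that flow, with the jq filter taken from referrals.sh and --hostnqn/--hostid omitted for brevity:

    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length      # expect 3 after the three adds
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
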
00:08:28.895 [2024-04-26 14:45:10.716625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.895 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.895 [2024-04-26 14:45:10.789187] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.895 [2024-04-26 14:45:10.861755] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.895 [2024-04-26 14:45:10.861799] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.895 [2024-04-26 14:45:10.861808] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.895 [2024-04-26 14:45:10.861816] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.895 [2024-04-26 14:45:10.861823] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.895 [2024-04-26 14:45:10.861909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.895 [2024-04-26 14:45:10.862141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.895 [2024-04-26 14:45:10.862297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.895 [2024-04-26 14:45:10.862298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.895 14:45:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:28.895 14:45:11 -- common/autotest_common.sh@850 -- # return 0 00:08:28.895 14:45:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:28.895 14:45:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:28.895 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:28.895 14:45:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.895 14:45:11 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.895 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.895 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:28.895 [2024-04-26 14:45:11.544450] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.895 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:28.895 14:45:11 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:28.895 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:28.895 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.155 [2024-04-26 14:45:11.560632] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:29.155 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.155 14:45:11 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:29.155 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.155 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.155 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.155 14:45:11 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:29.156 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.156 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.156 14:45:11 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:29.156 14:45:11 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:29.156 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.156 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.156 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.156 14:45:11 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.156 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.156 14:45:11 -- target/referrals.sh@48 -- # jq length 00:08:29.156 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.156 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.156 14:45:11 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:29.156 14:45:11 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:29.156 14:45:11 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:29.156 14:45:11 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.156 14:45:11 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:29.156 14:45:11 -- target/referrals.sh@21 -- # sort 00:08:29.156 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.156 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.156 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.156 14:45:11 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:29.156 14:45:11 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:29.156 14:45:11 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:29.156 14:45:11 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.156 14:45:11 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.156 14:45:11 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.156 14:45:11 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.156 14:45:11 -- target/referrals.sh@26 -- # sort 00:08:29.417 14:45:11 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:29.417 14:45:11 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:29.417 14:45:11 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:29.417 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.417 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.417 14:45:11 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:29.417 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.417 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.417 14:45:11 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:29.417 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.417 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.417 14:45:11 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:08:29.417 14:45:11 -- target/referrals.sh@56 -- # jq length 00:08:29.417 14:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.417 14:45:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 14:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.417 14:45:11 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:29.417 14:45:11 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:29.417 14:45:11 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.417 14:45:11 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.417 14:45:11 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.417 14:45:11 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.417 14:45:11 -- target/referrals.sh@26 -- # sort 00:08:29.677 14:45:12 -- target/referrals.sh@26 -- # echo 00:08:29.677 14:45:12 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:29.677 14:45:12 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:29.677 14:45:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.677 14:45:12 -- common/autotest_common.sh@10 -- # set +x 00:08:29.677 14:45:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.677 14:45:12 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:29.677 14:45:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.677 14:45:12 -- common/autotest_common.sh@10 -- # set +x 00:08:29.677 14:45:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.677 14:45:12 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:29.678 14:45:12 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:29.678 14:45:12 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.678 14:45:12 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:29.678 14:45:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:29.678 14:45:12 -- common/autotest_common.sh@10 -- # set +x 00:08:29.678 14:45:12 -- target/referrals.sh@21 -- # sort 00:08:29.678 14:45:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:29.678 14:45:12 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:29.678 14:45:12 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:29.678 14:45:12 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:29.678 14:45:12 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.678 14:45:12 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.678 14:45:12 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.678 14:45:12 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.678 14:45:12 -- target/referrals.sh@26 -- # sort 00:08:29.937 14:45:12 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:29.937 14:45:12 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:29.937 14:45:12 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:08:29.937 14:45:12 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:29.937 14:45:12 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:29.937 14:45:12 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.937 14:45:12 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:29.937 14:45:12 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:29.937 14:45:12 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:29.937 14:45:12 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:29.937 14:45:12 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:29.937 14:45:12 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.937 14:45:12 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:30.197 14:45:12 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:30.197 14:45:12 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:30.197 14:45:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.197 14:45:12 -- common/autotest_common.sh@10 -- # set +x 00:08:30.197 14:45:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.197 14:45:12 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:30.197 14:45:12 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:30.197 14:45:12 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.197 14:45:12 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:30.197 14:45:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.197 14:45:12 -- common/autotest_common.sh@10 -- # set +x 00:08:30.197 14:45:12 -- target/referrals.sh@21 -- # sort 00:08:30.197 14:45:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.197 14:45:12 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:30.197 14:45:12 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:30.197 14:45:12 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:30.198 14:45:12 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.198 14:45:12 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.198 14:45:12 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.198 14:45:12 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.198 14:45:12 -- target/referrals.sh@26 -- # sort 00:08:30.198 14:45:12 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:30.198 14:45:12 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:30.198 14:45:12 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:30.198 14:45:12 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:30.198 14:45:12 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:08:30.198 14:45:12 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.198 14:45:12 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:30.458 14:45:12 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:30.458 14:45:12 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:30.458 14:45:12 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:30.458 14:45:12 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:30.458 14:45:12 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.458 14:45:12 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:30.458 14:45:13 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:30.458 14:45:13 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:30.458 14:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.458 14:45:13 -- common/autotest_common.sh@10 -- # set +x 00:08:30.458 14:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.458 14:45:13 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.458 14:45:13 -- target/referrals.sh@82 -- # jq length 00:08:30.458 14:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.458 14:45:13 -- common/autotest_common.sh@10 -- # set +x 00:08:30.458 14:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.719 14:45:13 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:30.719 14:45:13 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:30.719 14:45:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.719 14:45:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.719 14:45:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.719 14:45:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.719 14:45:13 -- target/referrals.sh@26 -- # sort 00:08:30.719 14:45:13 -- target/referrals.sh@26 -- # echo 00:08:30.719 14:45:13 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:30.719 14:45:13 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:30.719 14:45:13 -- target/referrals.sh@86 -- # nvmftestfini 00:08:30.719 14:45:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:30.719 14:45:13 -- nvmf/common.sh@117 -- # sync 00:08:30.719 14:45:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.719 14:45:13 -- nvmf/common.sh@120 -- # set +e 00:08:30.719 14:45:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.719 14:45:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.719 rmmod nvme_tcp 00:08:30.719 rmmod nvme_fabrics 00:08:30.719 rmmod nvme_keyring 00:08:30.719 14:45:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.719 14:45:13 -- nvmf/common.sh@124 -- # set -e 
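Condensed out of the trace above, the referrals exercise is just a handful of RPCs against the running discovery service plus an nvme-cli check; a minimal sketch of the same flow, assuming a running nvmf_tgt and the stock scripts/rpc.py helper (which the rpc_cmd wrapper calls in these tests), would be:
  # illustrative sketch only -- addresses and ports taken from the trace above
  rpc.py nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, 8192-byte in-capsule data
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430       # advertise a referral
  rpc.py nvmf_discovery_get_referrals | jq length                      # -> 1
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json                     # referral shows up as a discovery record
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430    # remove it; get_referrals drops back to 0
The test repeats this with explicit subsystem NQNs (-n nqn.2016-06.io.spdk:cnode1 and -n nqn.2014-08.org.nvmexpress.discovery) to check that the referral's subtype is reported correctly in the discovery log page.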
00:08:30.719 14:45:13 -- nvmf/common.sh@125 -- # return 0 00:08:30.719 14:45:13 -- nvmf/common.sh@478 -- # '[' -n 909535 ']' 00:08:30.719 14:45:13 -- nvmf/common.sh@479 -- # killprocess 909535 00:08:30.719 14:45:13 -- common/autotest_common.sh@936 -- # '[' -z 909535 ']' 00:08:30.719 14:45:13 -- common/autotest_common.sh@940 -- # kill -0 909535 00:08:30.719 14:45:13 -- common/autotest_common.sh@941 -- # uname 00:08:30.719 14:45:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:30.719 14:45:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 909535 00:08:30.980 14:45:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:30.980 14:45:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:30.980 14:45:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 909535' 00:08:30.980 killing process with pid 909535 00:08:30.980 14:45:13 -- common/autotest_common.sh@955 -- # kill 909535 00:08:30.980 14:45:13 -- common/autotest_common.sh@960 -- # wait 909535 00:08:30.980 14:45:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:30.980 14:45:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:30.980 14:45:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:30.980 14:45:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.980 14:45:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.980 14:45:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.980 14:45:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.980 14:45:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.523 14:45:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.523 00:08:33.523 real 0m12.455s 00:08:33.523 user 0m13.608s 00:08:33.523 sys 0m6.197s 00:08:33.523 14:45:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.523 14:45:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.523 ************************************ 00:08:33.523 END TEST nvmf_referrals 00:08:33.523 ************************************ 00:08:33.523 14:45:15 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:33.523 14:45:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:33.523 14:45:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.523 14:45:15 -- common/autotest_common.sh@10 -- # set +x 00:08:33.523 ************************************ 00:08:33.523 START TEST nvmf_connect_disconnect 00:08:33.523 ************************************ 00:08:33.523 14:45:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:33.523 * Looking for test storage... 
00:08:33.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.523 14:45:15 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.523 14:45:15 -- nvmf/common.sh@7 -- # uname -s 00:08:33.523 14:45:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.523 14:45:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.523 14:45:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.523 14:45:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.523 14:45:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.523 14:45:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.523 14:45:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.523 14:45:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.523 14:45:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.523 14:45:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.523 14:45:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:33.523 14:45:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:33.523 14:45:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.523 14:45:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.523 14:45:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.523 14:45:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.523 14:45:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.523 14:45:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.523 14:45:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.523 14:45:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.523 14:45:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.523 14:45:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.523 14:45:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.523 14:45:15 -- paths/export.sh@5 -- # export PATH 00:08:33.523 14:45:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.523 14:45:15 -- nvmf/common.sh@47 -- # : 0 00:08:33.523 14:45:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.523 14:45:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.523 14:45:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.523 14:45:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.523 14:45:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.523 14:45:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.523 14:45:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.523 14:45:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.523 14:45:15 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.523 14:45:15 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.523 14:45:15 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:33.523 14:45:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:33.523 14:45:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.523 14:45:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:33.523 14:45:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:33.523 14:45:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:33.523 14:45:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.523 14:45:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.523 14:45:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.523 14:45:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:33.523 14:45:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:33.523 14:45:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.523 14:45:15 -- common/autotest_common.sh@10 -- # set +x 00:08:40.109 14:45:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:40.109 14:45:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.109 14:45:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.109 14:45:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.109 14:45:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.109 14:45:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.109 14:45:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.109 14:45:22 -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.109 14:45:22 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:40.109 14:45:22 -- nvmf/common.sh@296 -- # e810=() 00:08:40.109 14:45:22 -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.109 14:45:22 -- nvmf/common.sh@297 -- # x722=() 00:08:40.109 14:45:22 -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.109 14:45:22 -- nvmf/common.sh@298 -- # mlx=() 00:08:40.109 14:45:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.109 14:45:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.109 14:45:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.109 14:45:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.109 14:45:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.109 14:45:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.109 14:45:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:40.109 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:40.109 14:45:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.109 14:45:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:40.109 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:40.109 14:45:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.109 14:45:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.109 14:45:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.109 14:45:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.109 14:45:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:40.109 14:45:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.109 14:45:22 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:08:40.109 Found net devices under 0000:31:00.0: cvl_0_0 00:08:40.109 14:45:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.109 14:45:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.110 14:45:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.110 14:45:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:40.110 14:45:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.110 14:45:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:40.110 Found net devices under 0000:31:00.1: cvl_0_1 00:08:40.110 14:45:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.110 14:45:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:40.110 14:45:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:40.110 14:45:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:40.110 14:45:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:40.370 14:45:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:40.370 14:45:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.370 14:45:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.370 14:45:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.370 14:45:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.370 14:45:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.370 14:45:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.370 14:45:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.370 14:45:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.370 14:45:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.370 14:45:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.370 14:45:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.370 14:45:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.370 14:45:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.370 14:45:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.370 14:45:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.370 14:45:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.370 14:45:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.631 14:45:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.631 14:45:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.631 14:45:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:08:40.631 00:08:40.631 --- 10.0.0.2 ping statistics --- 00:08:40.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.631 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:08:40.631 14:45:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:08:40.631 00:08:40.631 --- 10.0.0.1 ping statistics --- 00:08:40.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.631 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:08:40.631 14:45:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.631 14:45:23 -- nvmf/common.sh@411 -- # return 0 00:08:40.631 14:45:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:40.631 14:45:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.631 14:45:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:40.631 14:45:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:40.631 14:45:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.631 14:45:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:40.631 14:45:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:40.631 14:45:23 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:40.631 14:45:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:40.631 14:45:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:40.631 14:45:23 -- common/autotest_common.sh@10 -- # set +x 00:08:40.631 14:45:23 -- nvmf/common.sh@470 -- # nvmfpid=914381 00:08:40.631 14:45:23 -- nvmf/common.sh@471 -- # waitforlisten 914381 00:08:40.631 14:45:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.631 14:45:23 -- common/autotest_common.sh@817 -- # '[' -z 914381 ']' 00:08:40.631 14:45:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.631 14:45:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:40.631 14:45:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.631 14:45:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:40.631 14:45:23 -- common/autotest_common.sh@10 -- # set +x 00:08:40.631 [2024-04-26 14:45:23.178880] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:08:40.631 [2024-04-26 14:45:23.178932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.631 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.631 [2024-04-26 14:45:23.250847] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.891 [2024-04-26 14:45:23.321110] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.891 [2024-04-26 14:45:23.321150] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.891 [2024-04-26 14:45:23.321159] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.891 [2024-04-26 14:45:23.321170] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.891 [2024-04-26 14:45:23.321178] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
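Stripped of the function markers, the network bring-up that nvmf_tcp_init performs before each target start is a small, fixed sequence: move one E810 port into a private namespace for the target, keep the other in the root namespace for the initiator, and verify reachability. A condensed sketch, reusing the interface names and addresses from the trace above:
  ip netns add cvl_0_0_ns_spdk                                   # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # first port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns
The target is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why NVMF_APP is re-prefixed with NVMF_TARGET_NS_CMD once the pings succeed.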
00:08:40.891 [2024-04-26 14:45:23.321320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.891 [2024-04-26 14:45:23.321437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.891 [2024-04-26 14:45:23.321457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.891 [2024-04-26 14:45:23.321461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.461 14:45:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:41.461 14:45:23 -- common/autotest_common.sh@850 -- # return 0 00:08:41.461 14:45:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:41.461 14:45:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:41.461 14:45:23 -- common/autotest_common.sh@10 -- # set +x 00:08:41.461 14:45:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.461 14:45:23 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:41.461 14:45:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.461 14:45:23 -- common/autotest_common.sh@10 -- # set +x 00:08:41.461 [2024-04-26 14:45:24.004443] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.461 14:45:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.461 14:45:24 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:41.461 14:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.461 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:08:41.461 14:45:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.461 14:45:24 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:41.461 14:45:24 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:41.461 14:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.461 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:08:41.461 14:45:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.461 14:45:24 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.461 14:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.461 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:08:41.461 14:45:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.461 14:45:24 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.461 14:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.461 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:08:41.461 [2024-04-26 14:45:24.063728] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.461 14:45:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.461 14:45:24 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:41.461 14:45:24 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:41.461 14:45:24 -- target/connect_disconnect.sh@34 -- # set +x 00:08:45.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.887 14:45:42 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:59.887 14:45:42 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:59.887 14:45:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:59.887 14:45:42 -- nvmf/common.sh@117 -- # sync 00:08:59.887 14:45:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.887 14:45:42 -- nvmf/common.sh@120 -- # set +e 00:08:59.887 14:45:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.887 14:45:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.887 rmmod nvme_tcp 00:08:59.887 rmmod nvme_fabrics 00:08:59.887 rmmod nvme_keyring 00:08:59.887 14:45:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.887 14:45:42 -- nvmf/common.sh@124 -- # set -e 00:08:59.887 14:45:42 -- nvmf/common.sh@125 -- # return 0 00:08:59.887 14:45:42 -- nvmf/common.sh@478 -- # '[' -n 914381 ']' 00:08:59.887 14:45:42 -- nvmf/common.sh@479 -- # killprocess 914381 00:08:59.887 14:45:42 -- common/autotest_common.sh@936 -- # '[' -z 914381 ']' 00:08:59.887 14:45:42 -- common/autotest_common.sh@940 -- # kill -0 914381 00:08:59.887 14:45:42 -- common/autotest_common.sh@941 -- # uname 00:08:59.887 14:45:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:59.887 14:45:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 914381 00:08:59.887 14:45:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:59.887 14:45:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:59.888 14:45:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 914381' 00:08:59.888 killing process with pid 914381 00:08:59.888 14:45:42 -- common/autotest_common.sh@955 -- # kill 914381 00:08:59.888 14:45:42 -- common/autotest_common.sh@960 -- # wait 914381 00:09:00.148 14:45:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:00.148 14:45:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:00.148 14:45:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:00.148 14:45:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.148 14:45:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.148 14:45:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.148 14:45:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.148 14:45:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.061 14:45:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.061 00:09:02.061 real 0m28.874s 00:09:02.061 user 1m18.830s 00:09:02.061 sys 0m6.553s 00:09:02.061 14:45:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:02.061 14:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:02.061 ************************************ 00:09:02.061 END TEST nvmf_connect_disconnect 00:09:02.061 ************************************ 00:09:02.061 14:45:44 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:02.061 14:45:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:02.061 14:45:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.061 14:45:44 -- common/autotest_common.sh@10 -- # set +x 00:09:02.323 ************************************ 00:09:02.323 START TEST nvmf_multitarget 00:09:02.323 ************************************ 00:09:02.323 14:45:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 
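The five "disconnected 1 controller(s)" lines in the connect/disconnect run above are produced with xtrace suppressed (set +x), so the per-iteration commands do not appear in the trace. Each iteration presumably amounts to something like the following hedged sketch (flags reconstructed from the listener and subsystem created earlier, not the literal script body):
  # hedged sketch of one iteration; exact options in connect_disconnect.sh may differ
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # emits the "NQN:... disconnected 1 controller(s)" line
With num_iterations=5, the loop runs five connect/disconnect cycles against the Malloc0-backed cnode1 subsystem before tearing the target down.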
00:09:02.323 * Looking for test storage... 00:09:02.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.323 14:45:44 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.323 14:45:44 -- nvmf/common.sh@7 -- # uname -s 00:09:02.323 14:45:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.323 14:45:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.323 14:45:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.323 14:45:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.323 14:45:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.323 14:45:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.323 14:45:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.323 14:45:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.323 14:45:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.323 14:45:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.323 14:45:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:02.323 14:45:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:02.323 14:45:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.323 14:45:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.323 14:45:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.323 14:45:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.584 14:45:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.585 14:45:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.585 14:45:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.585 14:45:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.585 14:45:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.585 14:45:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.585 14:45:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.585 14:45:44 -- paths/export.sh@5 -- # export PATH 00:09:02.585 14:45:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.585 14:45:44 -- nvmf/common.sh@47 -- # : 0 00:09:02.585 14:45:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.585 14:45:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.585 14:45:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.585 14:45:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.585 14:45:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.585 14:45:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.585 14:45:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.585 14:45:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.585 14:45:44 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:02.585 14:45:44 -- target/multitarget.sh@15 -- # nvmftestinit 00:09:02.585 14:45:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:02.585 14:45:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.585 14:45:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:02.585 14:45:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:02.585 14:45:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:02.585 14:45:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.585 14:45:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.585 14:45:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.585 14:45:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:02.585 14:45:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:02.585 14:45:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.585 14:45:45 -- common/autotest_common.sh@10 -- # set +x 00:09:09.175 14:45:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:09.175 14:45:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.175 14:45:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.175 14:45:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.175 14:45:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.175 14:45:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.175 14:45:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.175 14:45:51 -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.175 14:45:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.175 14:45:51 -- 
nvmf/common.sh@296 -- # e810=() 00:09:09.175 14:45:51 -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.175 14:45:51 -- nvmf/common.sh@297 -- # x722=() 00:09:09.175 14:45:51 -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.175 14:45:51 -- nvmf/common.sh@298 -- # mlx=() 00:09:09.175 14:45:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.175 14:45:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.175 14:45:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.175 14:45:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.175 14:45:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.175 14:45:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.175 14:45:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:09.175 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:09.175 14:45:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.175 14:45:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:09.175 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:09.175 14:45:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.175 14:45:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.175 14:45:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.175 14:45:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:09.175 14:45:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.175 14:45:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:09:09.175 Found net devices under 0000:31:00.0: cvl_0_0 00:09:09.175 14:45:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.175 14:45:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.175 14:45:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.175 14:45:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:09.175 14:45:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.175 14:45:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:09.175 Found net devices under 0000:31:00.1: cvl_0_1 00:09:09.175 14:45:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.175 14:45:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:09.175 14:45:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:09.175 14:45:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:09.175 14:45:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:09.175 14:45:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.175 14:45:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.175 14:45:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.175 14:45:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.175 14:45:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.175 14:45:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.175 14:45:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.175 14:45:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.175 14:45:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.175 14:45:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.176 14:45:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.176 14:45:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.176 14:45:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.176 14:45:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.176 14:45:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.176 14:45:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.176 14:45:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.176 14:45:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.176 14:45:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.176 14:45:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:09.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:09:09.176 00:09:09.176 --- 10.0.0.2 ping statistics --- 00:09:09.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.176 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:09:09.176 14:45:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:09:09.436 00:09:09.436 --- 10.0.0.1 ping statistics --- 00:09:09.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.436 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:09.437 14:45:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.437 14:45:51 -- nvmf/common.sh@411 -- # return 0 00:09:09.437 14:45:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:09.437 14:45:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.437 14:45:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:09.437 14:45:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:09.437 14:45:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.437 14:45:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:09.437 14:45:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:09.437 14:45:51 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:09.437 14:45:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:09.437 14:45:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:09.437 14:45:51 -- common/autotest_common.sh@10 -- # set +x 00:09:09.437 14:45:51 -- nvmf/common.sh@470 -- # nvmfpid=922530 00:09:09.437 14:45:51 -- nvmf/common.sh@471 -- # waitforlisten 922530 00:09:09.437 14:45:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.437 14:45:51 -- common/autotest_common.sh@817 -- # '[' -z 922530 ']' 00:09:09.437 14:45:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.437 14:45:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:09.437 14:45:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.437 14:45:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:09.437 14:45:51 -- common/autotest_common.sh@10 -- # set +x 00:09:09.437 [2024-04-26 14:45:51.953031] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:09.437 [2024-04-26 14:45:51.953093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.437 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.437 [2024-04-26 14:45:52.026740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.437 [2024-04-26 14:45:52.099219] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.437 [2024-04-26 14:45:52.099265] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.437 [2024-04-26 14:45:52.099279] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.437 [2024-04-26 14:45:52.099286] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.437 [2024-04-26 14:45:52.099293] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:09.437 [2024-04-26 14:45:52.099456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.437 [2024-04-26 14:45:52.099558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.437 [2024-04-26 14:45:52.099716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.437 [2024-04-26 14:45:52.099717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.381 14:45:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:10.381 14:45:52 -- common/autotest_common.sh@850 -- # return 0 00:09:10.381 14:45:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:10.381 14:45:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:10.381 14:45:52 -- common/autotest_common.sh@10 -- # set +x 00:09:10.381 14:45:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.381 14:45:52 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:10.381 14:45:52 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:10.381 14:45:52 -- target/multitarget.sh@21 -- # jq length 00:09:10.381 14:45:52 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:10.381 14:45:52 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:10.381 "nvmf_tgt_1" 00:09:10.381 14:45:52 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:10.642 "nvmf_tgt_2" 00:09:10.642 14:45:53 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:10.642 14:45:53 -- target/multitarget.sh@28 -- # jq length 00:09:10.642 14:45:53 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:10.642 14:45:53 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:10.642 true 00:09:10.642 14:45:53 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:10.903 true 00:09:10.903 14:45:53 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:10.903 14:45:53 -- target/multitarget.sh@35 -- # jq length 00:09:10.903 14:45:53 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:10.903 14:45:53 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:10.903 14:45:53 -- target/multitarget.sh@41 -- # nvmftestfini 00:09:10.903 14:45:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:10.903 14:45:53 -- nvmf/common.sh@117 -- # sync 00:09:10.903 14:45:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.903 14:45:53 -- nvmf/common.sh@120 -- # set +e 00:09:10.903 14:45:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.903 14:45:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.903 rmmod nvme_tcp 00:09:10.903 rmmod nvme_fabrics 00:09:10.903 rmmod nvme_keyring 00:09:10.903 14:45:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.903 14:45:53 -- nvmf/common.sh@124 -- # set -e 00:09:10.903 14:45:53 -- nvmf/common.sh@125 -- # return 0 
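What the multitarget run above verifies is simply the target count reported over RPC: it starts from the single default target, adds nvmf_tgt_1 and nvmf_tgt_2, checks the count with jq, then deletes both and checks again. A minimal sketch of that flow using the same multitarget_rpc.py helper seen in the trace (path shortened here; the comparisons mirror the "[ N != N ]" checks in multitarget.sh):

    RPC=./test/nvmf/target/multitarget_rpc.py

    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only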
00:09:10.903 14:45:53 -- nvmf/common.sh@478 -- # '[' -n 922530 ']' 00:09:10.903 14:45:53 -- nvmf/common.sh@479 -- # killprocess 922530 00:09:10.903 14:45:53 -- common/autotest_common.sh@936 -- # '[' -z 922530 ']' 00:09:10.903 14:45:53 -- common/autotest_common.sh@940 -- # kill -0 922530 00:09:10.903 14:45:53 -- common/autotest_common.sh@941 -- # uname 00:09:10.903 14:45:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:10.903 14:45:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 922530 00:09:11.163 14:45:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:11.163 14:45:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:11.163 14:45:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 922530' 00:09:11.163 killing process with pid 922530 00:09:11.164 14:45:53 -- common/autotest_common.sh@955 -- # kill 922530 00:09:11.164 14:45:53 -- common/autotest_common.sh@960 -- # wait 922530 00:09:11.164 14:45:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:11.164 14:45:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:11.164 14:45:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:11.164 14:45:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.164 14:45:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.164 14:45:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.164 14:45:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.164 14:45:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.710 14:45:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.710 00:09:13.710 real 0m10.929s 00:09:13.710 user 0m9.076s 00:09:13.710 sys 0m5.631s 00:09:13.710 14:45:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:13.710 14:45:55 -- common/autotest_common.sh@10 -- # set +x 00:09:13.710 ************************************ 00:09:13.710 END TEST nvmf_multitarget 00:09:13.710 ************************************ 00:09:13.710 14:45:55 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:13.710 14:45:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:13.710 14:45:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.710 14:45:55 -- common/autotest_common.sh@10 -- # set +x 00:09:13.710 ************************************ 00:09:13.710 START TEST nvmf_rpc 00:09:13.710 ************************************ 00:09:13.710 14:45:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:13.710 * Looking for test storage... 
00:09:13.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.710 14:45:56 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.710 14:45:56 -- nvmf/common.sh@7 -- # uname -s 00:09:13.710 14:45:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.710 14:45:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.710 14:45:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.710 14:45:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.710 14:45:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.710 14:45:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.710 14:45:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.710 14:45:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.710 14:45:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.710 14:45:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.710 14:45:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:13.710 14:45:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:13.710 14:45:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.710 14:45:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.710 14:45:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.710 14:45:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.710 14:45:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.710 14:45:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.710 14:45:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.710 14:45:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.710 14:45:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.710 14:45:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.710 14:45:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.710 14:45:56 -- paths/export.sh@5 -- # export PATH 00:09:13.710 14:45:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.710 14:45:56 -- nvmf/common.sh@47 -- # : 0 00:09:13.710 14:45:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.710 14:45:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.710 14:45:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.710 14:45:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.710 14:45:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.710 14:45:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.710 14:45:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.710 14:45:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.710 14:45:56 -- target/rpc.sh@11 -- # loops=5 00:09:13.710 14:45:56 -- target/rpc.sh@23 -- # nvmftestinit 00:09:13.710 14:45:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:13.710 14:45:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.710 14:45:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:13.710 14:45:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:13.710 14:45:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:13.710 14:45:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.710 14:45:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.710 14:45:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.710 14:45:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:13.710 14:45:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:13.710 14:45:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.710 14:45:56 -- common/autotest_common.sh@10 -- # set +x 00:09:21.846 14:46:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:21.846 14:46:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:21.846 14:46:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:21.846 14:46:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:21.846 14:46:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:21.846 14:46:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:21.846 14:46:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:21.846 14:46:02 -- nvmf/common.sh@295 -- # net_devs=() 00:09:21.846 14:46:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:21.846 14:46:02 -- nvmf/common.sh@296 -- # e810=() 00:09:21.846 14:46:02 -- nvmf/common.sh@296 -- # local -ga e810 00:09:21.846 
14:46:02 -- nvmf/common.sh@297 -- # x722=() 00:09:21.846 14:46:02 -- nvmf/common.sh@297 -- # local -ga x722 00:09:21.846 14:46:02 -- nvmf/common.sh@298 -- # mlx=() 00:09:21.846 14:46:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:21.846 14:46:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.846 14:46:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:21.846 14:46:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:21.846 14:46:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:21.846 14:46:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.846 14:46:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:21.846 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:21.846 14:46:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.846 14:46:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:21.846 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:21.846 14:46:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:21.846 14:46:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.846 14:46:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.846 14:46:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:21.846 14:46:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.846 14:46:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:21.846 Found net devices under 0000:31:00.0: cvl_0_0 00:09:21.846 14:46:02 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:21.846 14:46:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.846 14:46:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.846 14:46:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:21.846 14:46:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.846 14:46:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:21.846 Found net devices under 0000:31:00.1: cvl_0_1 00:09:21.846 14:46:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.846 14:46:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:21.846 14:46:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:21.846 14:46:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:21.846 14:46:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:21.846 14:46:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.846 14:46:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.846 14:46:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.846 14:46:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:21.846 14:46:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.846 14:46:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.846 14:46:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:21.846 14:46:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.846 14:46:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.846 14:46:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:21.846 14:46:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:21.846 14:46:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.846 14:46:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.846 14:46:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.846 14:46:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.846 14:46:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:21.846 14:46:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.846 14:46:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.846 14:46:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.846 14:46:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:21.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:09:21.846 00:09:21.846 --- 10.0.0.2 ping statistics --- 00:09:21.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.846 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:09:21.846 14:46:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:09:21.846 00:09:21.846 --- 10.0.0.1 ping statistics --- 00:09:21.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.846 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:09:21.846 14:46:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.846 14:46:03 -- nvmf/common.sh@411 -- # return 0 00:09:21.846 14:46:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:21.846 14:46:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.846 14:46:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:21.846 14:46:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:21.846 14:46:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.846 14:46:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:21.846 14:46:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:21.846 14:46:03 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:21.846 14:46:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:21.846 14:46:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:21.846 14:46:03 -- common/autotest_common.sh@10 -- # set +x 00:09:21.846 14:46:03 -- nvmf/common.sh@470 -- # nvmfpid=927031 00:09:21.846 14:46:03 -- nvmf/common.sh@471 -- # waitforlisten 927031 00:09:21.846 14:46:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.846 14:46:03 -- common/autotest_common.sh@817 -- # '[' -z 927031 ']' 00:09:21.846 14:46:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.846 14:46:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:21.846 14:46:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.846 14:46:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:21.846 14:46:03 -- common/autotest_common.sh@10 -- # set +x 00:09:21.846 [2024-04-26 14:46:03.410296] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:21.846 [2024-04-26 14:46:03.410358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.846 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.846 [2024-04-26 14:46:03.482913] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.846 [2024-04-26 14:46:03.554244] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.846 [2024-04-26 14:46:03.554289] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.846 [2024-04-26 14:46:03.554298] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.846 [2024-04-26 14:46:03.554306] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.846 [2024-04-26 14:46:03.554312] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
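Because the target-side NIC now lives inside cvl_0_0_ns_spdk, nvmfappstart launches the nvmf_tgt binary through ip netns exec and only proceeds once the application answers on its RPC socket. A stripped-down equivalent (paths shortened; the polling loop below is a simplified stand-in for the waitforlisten helper, not its actual implementation):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &     # 4-core mask, all tracepoint groups enabled
    nvmfpid=$!

    # wait until the target answers on its default RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done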
00:09:21.846 [2024-04-26 14:46:03.554468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.846 [2024-04-26 14:46:03.554597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.846 [2024-04-26 14:46:03.554754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.846 [2024-04-26 14:46:03.554755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.846 14:46:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:21.846 14:46:04 -- common/autotest_common.sh@850 -- # return 0 00:09:21.846 14:46:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:21.846 14:46:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:21.846 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:21.846 14:46:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.846 14:46:04 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:21.846 14:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.846 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:21.846 14:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.846 14:46:04 -- target/rpc.sh@26 -- # stats='{ 00:09:21.846 "tick_rate": 2400000000, 00:09:21.846 "poll_groups": [ 00:09:21.846 { 00:09:21.846 "name": "nvmf_tgt_poll_group_0", 00:09:21.846 "admin_qpairs": 0, 00:09:21.847 "io_qpairs": 0, 00:09:21.847 "current_admin_qpairs": 0, 00:09:21.847 "current_io_qpairs": 0, 00:09:21.847 "pending_bdev_io": 0, 00:09:21.847 "completed_nvme_io": 0, 00:09:21.847 "transports": [] 00:09:21.847 }, 00:09:21.847 { 00:09:21.847 "name": "nvmf_tgt_poll_group_1", 00:09:21.847 "admin_qpairs": 0, 00:09:21.847 "io_qpairs": 0, 00:09:21.847 "current_admin_qpairs": 0, 00:09:21.847 "current_io_qpairs": 0, 00:09:21.847 "pending_bdev_io": 0, 00:09:21.847 "completed_nvme_io": 0, 00:09:21.847 "transports": [] 00:09:21.847 }, 00:09:21.847 { 00:09:21.847 "name": "nvmf_tgt_poll_group_2", 00:09:21.847 "admin_qpairs": 0, 00:09:21.847 "io_qpairs": 0, 00:09:21.847 "current_admin_qpairs": 0, 00:09:21.847 "current_io_qpairs": 0, 00:09:21.847 "pending_bdev_io": 0, 00:09:21.847 "completed_nvme_io": 0, 00:09:21.847 "transports": [] 00:09:21.847 }, 00:09:21.847 { 00:09:21.847 "name": "nvmf_tgt_poll_group_3", 00:09:21.847 "admin_qpairs": 0, 00:09:21.847 "io_qpairs": 0, 00:09:21.847 "current_admin_qpairs": 0, 00:09:21.847 "current_io_qpairs": 0, 00:09:21.847 "pending_bdev_io": 0, 00:09:21.847 "completed_nvme_io": 0, 00:09:21.847 "transports": [] 00:09:21.847 } 00:09:21.847 ] 00:09:21.847 }' 00:09:21.847 14:46:04 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:21.847 14:46:04 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:21.847 14:46:04 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:21.847 14:46:04 -- target/rpc.sh@15 -- # wc -l 00:09:21.847 14:46:04 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:21.847 14:46:04 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:21.847 14:46:04 -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:21.847 14:46:04 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.847 14:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.847 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:21.847 [2024-04-26 14:46:04.347823] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.847 14:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.847 14:46:04 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:21.847 14:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.847 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:21.847 14:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.847 14:46:04 -- target/rpc.sh@33 -- # stats='{ 00:09:21.847 "tick_rate": 2400000000, 00:09:21.847 "poll_groups": [ 00:09:21.847 { 00:09:21.847 "name": "nvmf_tgt_poll_group_0", 00:09:21.847 "admin_qpairs": 0, 00:09:21.847 "io_qpairs": 0, 00:09:21.847 "current_admin_qpairs": 0, 00:09:21.847 "current_io_qpairs": 0, 00:09:21.847 "pending_bdev_io": 0, 00:09:21.847 "completed_nvme_io": 0, 00:09:21.847 "transports": [ 00:09:21.847 { 00:09:21.847 "trtype": "TCP" 00:09:21.847 } 00:09:21.847 ] 00:09:21.847 }, 00:09:21.847 { 00:09:21.847 "name": "nvmf_tgt_poll_group_1", 00:09:21.847 "admin_qpairs": 0, 00:09:21.847 "io_qpairs": 0, 00:09:21.847 "current_admin_qpairs": 0, 00:09:21.847 "current_io_qpairs": 0, 00:09:21.847 "pending_bdev_io": 0, 00:09:21.847 "completed_nvme_io": 0, 00:09:21.847 "transports": [ 00:09:21.847 { 00:09:21.847 "trtype": "TCP" 00:09:21.847 } 00:09:21.847 ] 00:09:21.847 }, 00:09:21.847 { 00:09:21.847 "name": "nvmf_tgt_poll_group_2", 00:09:21.847 "admin_qpairs": 0, 00:09:21.847 "io_qpairs": 0, 00:09:21.847 "current_admin_qpairs": 0, 00:09:21.847 "current_io_qpairs": 0, 00:09:21.847 "pending_bdev_io": 0, 00:09:21.847 "completed_nvme_io": 0, 00:09:21.847 "transports": [ 00:09:21.847 { 00:09:21.847 "trtype": "TCP" 00:09:21.847 } 00:09:21.847 ] 00:09:21.847 }, 00:09:21.847 { 00:09:21.847 "name": "nvmf_tgt_poll_group_3", 00:09:21.847 "admin_qpairs": 0, 00:09:21.847 "io_qpairs": 0, 00:09:21.847 "current_admin_qpairs": 0, 00:09:21.847 "current_io_qpairs": 0, 00:09:21.847 "pending_bdev_io": 0, 00:09:21.847 "completed_nvme_io": 0, 00:09:21.847 "transports": [ 00:09:21.847 { 00:09:21.847 "trtype": "TCP" 00:09:21.847 } 00:09:21.847 ] 00:09:21.847 } 00:09:21.847 ] 00:09:21.847 }' 00:09:21.847 14:46:04 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:21.847 14:46:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:21.847 14:46:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:21.847 14:46:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.847 14:46:04 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:21.847 14:46:04 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:21.847 14:46:04 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:21.847 14:46:04 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:21.847 14:46:04 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.847 14:46:04 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:21.847 14:46:04 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:21.847 14:46:04 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:21.847 14:46:04 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:21.847 14:46:04 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:21.847 14:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.847 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:21.847 Malloc1 00:09:21.847 14:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.847 14:46:04 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:21.847 14:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.847 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:21.847 
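The jcount/jsum helpers above are plain jq plumbing over nvmf_get_stats: jcount counts the poll groups (one per core of the 0xF mask, so 4) and jsum adds a numeric field across them (every qpair counter is still 0 because no host has connected yet). Reproduced with the same filters shown in the trace, with a direct scripts/rpc.py call standing in for the rpc_cmd wrapper:

    stats=$(./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats)

    echo "$stats" | jq '.poll_groups[].name' | wc -l                               # jcount: expect 4
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'   # jsum: expect 0
    echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}'      # jsum: expect 0
    echo "$stats" | jq '.poll_groups[0].transports[0]'   # null before, a TCP entry after nvmf_create_transport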
14:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:21.847 14:46:04 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.847 14:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:21.847 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 14:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.112 14:46:04 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:22.112 14:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.112 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 14:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.112 14:46:04 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.112 14:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.112 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 [2024-04-26 14:46:04.539683] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.112 14:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.112 14:46:04 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:22.112 14:46:04 -- common/autotest_common.sh@638 -- # local es=0 00:09:22.112 14:46:04 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:22.112 14:46:04 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:22.112 14:46:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:22.112 14:46:04 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:22.112 14:46:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:22.112 14:46:04 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:22.112 14:46:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:22.112 14:46:04 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:22.112 14:46:04 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:22.112 14:46:04 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:22.113 [2024-04-26 14:46:04.566637] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:22.113 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:22.113 could not add new controller: failed to write to nvme-fabrics device 00:09:22.113 14:46:04 -- common/autotest_common.sh@641 -- # es=1 00:09:22.113 14:46:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:22.113 14:46:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:22.113 14:46:04 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:09:22.113 14:46:04 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:22.113 14:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.113 14:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:22.113 14:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.113 14:46:04 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.026 14:46:06 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.026 14:46:06 -- common/autotest_common.sh@1184 -- # local i=0 00:09:24.026 14:46:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.026 14:46:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:24.026 14:46:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:25.939 14:46:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:25.939 14:46:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:25.939 14:46:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.939 14:46:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:25.939 14:46:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.939 14:46:08 -- common/autotest_common.sh@1194 -- # return 0 00:09:25.939 14:46:08 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.939 14:46:08 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.939 14:46:08 -- common/autotest_common.sh@1205 -- # local i=0 00:09:25.939 14:46:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:25.939 14:46:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.939 14:46:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:25.939 14:46:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.939 14:46:08 -- common/autotest_common.sh@1217 -- # return 0 00:09:25.939 14:46:08 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:25.939 14:46:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.939 14:46:08 -- common/autotest_common.sh@10 -- # set +x 00:09:25.939 14:46:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.939 14:46:08 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.939 14:46:08 -- common/autotest_common.sh@638 -- # local es=0 00:09:25.939 14:46:08 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.939 14:46:08 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:25.939 14:46:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:25.939 14:46:08 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:25.939 14:46:08 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:25.939 14:46:08 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:25.939 14:46:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:25.939 14:46:08 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:25.939 14:46:08 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:25.939 14:46:08 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.940 [2024-04-26 14:46:08.521883] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:25.940 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:25.940 could not add new controller: failed to write to nvme-fabrics device 00:09:25.940 14:46:08 -- common/autotest_common.sh@641 -- # es=1 00:09:25.940 14:46:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:25.940 14:46:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:25.940 14:46:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:25.940 14:46:08 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:25.940 14:46:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.940 14:46:08 -- common/autotest_common.sh@10 -- # set +x 00:09:25.940 14:46:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.940 14:46:08 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.856 14:46:10 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.856 14:46:10 -- common/autotest_common.sh@1184 -- # local i=0 00:09:27.856 14:46:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.856 14:46:10 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:27.856 14:46:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:29.861 14:46:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:29.861 14:46:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:29.861 14:46:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.861 14:46:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:29.861 14:46:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.861 14:46:12 -- common/autotest_common.sh@1194 -- # return 0 00:09:29.861 14:46:12 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.861 14:46:12 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.861 14:46:12 -- common/autotest_common.sh@1205 -- # local i=0 00:09:29.861 14:46:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:29.861 14:46:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.861 14:46:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:29.861 14:46:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.861 14:46:12 -- common/autotest_common.sh@1217 -- # return 0 00:09:29.861 14:46:12 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.861 14:46:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.861 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:09:29.861 14:46:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.861 14:46:12 -- target/rpc.sh@81 -- # seq 1 5 00:09:29.861 14:46:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:29.861 14:46:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:29.861 14:46:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.861 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:09:29.861 14:46:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.861 14:46:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.861 14:46:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.861 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:09:29.861 [2024-04-26 14:46:12.227145] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.861 14:46:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.861 14:46:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:29.861 14:46:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.861 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:09:29.861 14:46:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.861 14:46:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:29.861 14:46:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.861 14:46:12 -- common/autotest_common.sh@10 -- # set +x 00:09:29.861 14:46:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.861 14:46:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.254 14:46:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.254 14:46:13 -- common/autotest_common.sh@1184 -- # local i=0 00:09:31.254 14:46:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.254 14:46:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:31.254 14:46:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:33.164 14:46:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:33.165 14:46:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:33.165 14:46:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.165 14:46:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:33.165 14:46:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.165 14:46:15 -- common/autotest_common.sh@1194 -- # return 0 00:09:33.165 14:46:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.425 14:46:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.425 14:46:15 -- common/autotest_common.sh@1205 -- # local i=0 00:09:33.425 14:46:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:33.425 14:46:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
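Before this create/delete loop, the trace above (rpc.sh lines 52 through 73) has already exercised the per-host access controls on cnode1: with allow_any_host disabled the connect is rejected with "does not allow host", after nvmf_subsystem_add_host it succeeds, after nvmf_subsystem_remove_host it is rejected again, and re-enabling allow_any_host lets any host in. As a compact sketch, with rpc.py standing in for the rpc_cmd wrapper and the host identity taken from nvme gen-hostnqn as earlier in this run (the uuid extraction below is just one way to get a matching hostid):

    HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*:}              # bare uuid part
    SUB=nqn.2016-06.io.spdk:cnode1

    rpc.py nvmf_subsystem_allow_any_host -d $SUB
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $SUB --hostnqn=$HOSTNQN --hostid=$HOSTID   # rejected
    rpc.py nvmf_subsystem_add_host $SUB $HOSTNQN
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $SUB --hostnqn=$HOSTNQN --hostid=$HOSTID   # accepted
    nvme disconnect -n $SUB
    rpc.py nvmf_subsystem_remove_host $SUB $HOSTNQN   # the next connect attempt is rejected again
    rpc.py nvmf_subsystem_allow_any_host -e $SUB      # and after this one it is accepted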
00:09:33.425 14:46:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:33.425 14:46:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.425 14:46:15 -- common/autotest_common.sh@1217 -- # return 0 00:09:33.425 14:46:15 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.425 14:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.425 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:09:33.425 14:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.425 14:46:15 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.425 14:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.425 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:09:33.425 14:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.425 14:46:15 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:33.425 14:46:15 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:33.425 14:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.425 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:09:33.425 14:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.425 14:46:15 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.425 14:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.425 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:09:33.425 [2024-04-26 14:46:15.937219] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.425 14:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.425 14:46:15 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:33.425 14:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.425 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:09:33.425 14:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.425 14:46:15 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:33.425 14:46:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.425 14:46:15 -- common/autotest_common.sh@10 -- # set +x 00:09:33.425 14:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.425 14:46:15 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.809 14:46:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.809 14:46:17 -- common/autotest_common.sh@1184 -- # local i=0 00:09:34.809 14:46:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.809 14:46:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:34.809 14:46:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:37.351 14:46:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:37.351 14:46:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:37.351 14:46:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.351 14:46:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:37.351 14:46:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.351 14:46:19 -- 
common/autotest_common.sh@1194 -- # return 0 00:09:37.351 14:46:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.351 14:46:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.351 14:46:19 -- common/autotest_common.sh@1205 -- # local i=0 00:09:37.351 14:46:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:37.351 14:46:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.351 14:46:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:37.351 14:46:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.351 14:46:19 -- common/autotest_common.sh@1217 -- # return 0 00:09:37.351 14:46:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.351 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.351 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:37.351 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.351 14:46:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.351 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.351 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:37.351 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.351 14:46:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:37.351 14:46:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.351 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.351 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:37.351 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.351 14:46:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.351 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.351 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:37.351 [2024-04-26 14:46:19.624980] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.351 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.351 14:46:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:37.351 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.351 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:37.351 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.351 14:46:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.351 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.351 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:37.351 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.351 14:46:19 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.734 14:46:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.734 14:46:21 -- common/autotest_common.sh@1184 -- # local i=0 00:09:38.734 14:46:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.734 14:46:21 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:09:38.734 14:46:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:40.642 14:46:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:40.642 14:46:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:40.643 14:46:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.643 14:46:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:40.643 14:46:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.643 14:46:23 -- common/autotest_common.sh@1194 -- # return 0 00:09:40.643 14:46:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.643 14:46:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.643 14:46:23 -- common/autotest_common.sh@1205 -- # local i=0 00:09:40.643 14:46:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:40.643 14:46:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.643 14:46:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.643 14:46:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:40.643 14:46:23 -- common/autotest_common.sh@1217 -- # return 0 00:09:40.643 14:46:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:40.643 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.643 14:46:23 -- common/autotest_common.sh@10 -- # set +x 00:09:40.643 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.643 14:46:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.643 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.643 14:46:23 -- common/autotest_common.sh@10 -- # set +x 00:09:40.902 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.902 14:46:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:40.902 14:46:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:40.902 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.902 14:46:23 -- common/autotest_common.sh@10 -- # set +x 00:09:40.902 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.902 14:46:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.902 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.902 14:46:23 -- common/autotest_common.sh@10 -- # set +x 00:09:40.902 [2024-04-26 14:46:23.328388] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.902 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.902 14:46:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:40.902 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.902 14:46:23 -- common/autotest_common.sh@10 -- # set +x 00:09:40.902 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.902 14:46:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:40.902 14:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.902 14:46:23 -- common/autotest_common.sh@10 -- # set +x 00:09:40.902 14:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.902 
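Each pass of the "for i in $(seq 1 $loops)" loop traced above repeats the same cycle: create the subsystem, attach a TCP listener and the Malloc1 namespace with nsid 5, open it to any host, connect from the initiator, wait for the SPDKISFASTANDAWESOME serial to show up in lsblk, then disconnect and tear everything down. Condensed, again with rpc.py standing in for rpc_cmd, the serial waits reduced to simple lsblk polls, and HOSTNQN/HOSTID as in the previous sketch:

    SUB=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem $SUB -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener $SUB -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns $SUB Malloc1 -n 5
        rpc.py nvmf_subsystem_allow_any_host $SUB
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $SUB --hostnqn=$HOSTNQN --hostid=$HOSTID
        until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done    # device appeared
        nvme disconnect -n $SUB
        while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done    # device gone
        rpc.py nvmf_subsystem_remove_ns $SUB 5
        rpc.py nvmf_delete_subsystem $SUB
    done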
14:46:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:42.282 14:46:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.282 14:46:24 -- common/autotest_common.sh@1184 -- # local i=0 00:09:42.282 14:46:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.282 14:46:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:42.282 14:46:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:44.823 14:46:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:44.823 14:46:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:44.823 14:46:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.823 14:46:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:44.823 14:46:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.823 14:46:26 -- common/autotest_common.sh@1194 -- # return 0 00:09:44.823 14:46:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.823 14:46:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.823 14:46:26 -- common/autotest_common.sh@1205 -- # local i=0 00:09:44.823 14:46:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:44.823 14:46:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.823 14:46:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:44.823 14:46:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.823 14:46:27 -- common/autotest_common.sh@1217 -- # return 0 00:09:44.823 14:46:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.823 14:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.823 14:46:27 -- common/autotest_common.sh@10 -- # set +x 00:09:44.823 14:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.823 14:46:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.823 14:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.823 14:46:27 -- common/autotest_common.sh@10 -- # set +x 00:09:44.823 14:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.823 14:46:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:44.823 14:46:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.823 14:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.823 14:46:27 -- common/autotest_common.sh@10 -- # set +x 00:09:44.823 14:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.823 14:46:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.823 14:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.823 14:46:27 -- common/autotest_common.sh@10 -- # set +x 00:09:44.823 [2024-04-26 14:46:27.072452] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.823 14:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.823 14:46:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:44.823 
14:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.823 14:46:27 -- common/autotest_common.sh@10 -- # set +x 00:09:44.823 14:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.823 14:46:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.823 14:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.823 14:46:27 -- common/autotest_common.sh@10 -- # set +x 00:09:44.823 14:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.823 14:46:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:46.228 14:46:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:46.228 14:46:28 -- common/autotest_common.sh@1184 -- # local i=0 00:09:46.228 14:46:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.228 14:46:28 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:46.228 14:46:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:48.142 14:46:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:48.142 14:46:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:48.142 14:46:30 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.142 14:46:30 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:48.142 14:46:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.142 14:46:30 -- common/autotest_common.sh@1194 -- # return 0 00:09:48.142 14:46:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.142 14:46:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.142 14:46:30 -- common/autotest_common.sh@1205 -- # local i=0 00:09:48.142 14:46:30 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:48.142 14:46:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.142 14:46:30 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:48.142 14:46:30 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.142 14:46:30 -- common/autotest_common.sh@1217 -- # return 0 00:09:48.142 14:46:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:48.142 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.142 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.142 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.142 14:46:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.142 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.142 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.142 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.142 14:46:30 -- target/rpc.sh@99 -- # seq 1 5 00:09:48.142 14:46:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:48.142 14:46:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:48.142 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.142 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 
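The connect/disconnect cycle itself is plain nvme-cli pointed at the SPDK TCP listener; the host NQN and host ID are the values produced by nvme gen-hostnqn earlier in the run. A hedged sketch of one cycle with the addresses and NQNs taken from the log (substitute your own target details):

subnqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
hostid=00539ede-7deb-ec11-9bc7-a4bf01928396

# Attach the exported namespace over NVMe/TCP.
nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
    -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420

# ... exercise the device (the test only waits for the serial to appear) ...

# Detach; nvme-cli reports "NQN:<subnqn> disconnected 1 controller(s)".
nvme disconnect -n "$subnqn"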
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 [2024-04-26 14:46:30.817352] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:48.403 14:46:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 [2024-04-26 14:46:30.881501] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:48.403 14:46:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 [2024-04-26 14:46:30.941684] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:48.403 14:46:30 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:30 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 [2024-04-26 14:46:30.993831] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.403 14:46:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 
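Every iteration of the loop traced above walks one subsystem through its whole lifecycle over the JSON-RPC socket: create, add a TCP listener, attach the Malloc1 namespace, open it to any host, then tear it back down. rpc_cmd is a thin wrapper around scripts/rpc.py, so the same iteration can be written directly as below (arguments copied from the log; namespace ID 1 is simply the ID SPDK assigned to the first added namespace here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"
done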
14:46:30 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.403 14:46:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:30 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:48.403 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.403 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.403 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:48.403 14:46:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:48.403 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.403 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 [2024-04-26 14:46:31.041978] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.403 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.403 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:48.403 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.403 14:46:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.403 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.403 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.664 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.664 14:46:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.664 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.664 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.664 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.664 14:46:31 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:09:48.664 14:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.664 14:46:31 -- common/autotest_common.sh@10 -- # set +x 00:09:48.664 14:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.664 14:46:31 -- target/rpc.sh@110 -- # stats='{ 00:09:48.664 "tick_rate": 2400000000, 00:09:48.664 "poll_groups": [ 00:09:48.664 { 00:09:48.664 "name": "nvmf_tgt_poll_group_0", 00:09:48.664 "admin_qpairs": 0, 00:09:48.664 "io_qpairs": 224, 00:09:48.664 "current_admin_qpairs": 0, 00:09:48.664 "current_io_qpairs": 0, 00:09:48.664 "pending_bdev_io": 0, 00:09:48.664 "completed_nvme_io": 226, 00:09:48.664 "transports": [ 00:09:48.664 { 00:09:48.664 "trtype": "TCP" 00:09:48.664 } 00:09:48.664 ] 00:09:48.664 }, 00:09:48.664 { 00:09:48.664 "name": "nvmf_tgt_poll_group_1", 00:09:48.664 "admin_qpairs": 1, 00:09:48.664 "io_qpairs": 223, 00:09:48.664 "current_admin_qpairs": 0, 00:09:48.664 "current_io_qpairs": 0, 00:09:48.664 "pending_bdev_io": 0, 00:09:48.664 "completed_nvme_io": 556, 00:09:48.664 "transports": [ 00:09:48.664 { 00:09:48.664 "trtype": "TCP" 00:09:48.664 } 00:09:48.664 ] 00:09:48.664 }, 00:09:48.664 { 00:09:48.664 "name": "nvmf_tgt_poll_group_2", 00:09:48.664 "admin_qpairs": 6, 00:09:48.664 "io_qpairs": 218, 00:09:48.664 "current_admin_qpairs": 0, 00:09:48.664 "current_io_qpairs": 0, 00:09:48.664 "pending_bdev_io": 0, 00:09:48.664 "completed_nvme_io": 233, 00:09:48.664 "transports": [ 00:09:48.664 { 00:09:48.664 "trtype": "TCP" 00:09:48.664 } 00:09:48.664 ] 00:09:48.664 }, 00:09:48.664 { 00:09:48.664 "name": "nvmf_tgt_poll_group_3", 00:09:48.664 "admin_qpairs": 0, 00:09:48.664 "io_qpairs": 224, 00:09:48.664 "current_admin_qpairs": 0, 00:09:48.664 "current_io_qpairs": 0, 00:09:48.664 "pending_bdev_io": 0, 00:09:48.664 "completed_nvme_io": 224, 00:09:48.664 "transports": [ 00:09:48.664 { 00:09:48.664 "trtype": "TCP" 00:09:48.664 } 00:09:48.664 ] 00:09:48.664 } 00:09:48.664 ] 00:09:48.664 }' 00:09:48.664 14:46:31 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:48.664 14:46:31 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:48.664 14:46:31 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:48.664 14:46:31 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:48.664 14:46:31 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:48.664 14:46:31 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:48.664 14:46:31 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:48.664 14:46:31 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:48.664 14:46:31 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:48.664 14:46:31 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:48.664 14:46:31 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:48.664 14:46:31 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:48.664 14:46:31 -- target/rpc.sh@123 -- # nvmftestfini 00:09:48.664 14:46:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:48.664 14:46:31 -- nvmf/common.sh@117 -- # sync 00:09:48.664 14:46:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.664 14:46:31 -- nvmf/common.sh@120 -- # set +e 00:09:48.664 14:46:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.664 14:46:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.664 rmmod nvme_tcp 00:09:48.664 rmmod nvme_fabrics 00:09:48.664 rmmod nvme_keyring 00:09:48.664 14:46:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.664 14:46:31 -- nvmf/common.sh@124 -- # set -e 00:09:48.664 14:46:31 -- 
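The jsum helper used right after nvmf_get_stats reduces the per-poll-group statistics to a single number: jq emits one value per poll group for the requested field, awk sums them, and the test only asserts the totals are positive (7 admin qpairs and 889 io qpairs in this run). A close variant of that pipeline, calling the target directly instead of reusing the captured $stats string:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

jsum() {
    # Sum a numeric jq filter over the nvmf_get_stats output,
    # e.g. jsum '.poll_groups[].io_qpairs'
    local filter=$1
    $rpc nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}

admin_total=$(jsum '.poll_groups[].admin_qpairs')
io_total=$(jsum '.poll_groups[].io_qpairs')
(( admin_total > 0 && io_total > 0 ))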
nvmf/common.sh@125 -- # return 0 00:09:48.664 14:46:31 -- nvmf/common.sh@478 -- # '[' -n 927031 ']' 00:09:48.664 14:46:31 -- nvmf/common.sh@479 -- # killprocess 927031 00:09:48.664 14:46:31 -- common/autotest_common.sh@936 -- # '[' -z 927031 ']' 00:09:48.664 14:46:31 -- common/autotest_common.sh@940 -- # kill -0 927031 00:09:48.664 14:46:31 -- common/autotest_common.sh@941 -- # uname 00:09:48.664 14:46:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:48.664 14:46:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 927031 00:09:48.924 14:46:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:48.924 14:46:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:48.924 14:46:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 927031' 00:09:48.924 killing process with pid 927031 00:09:48.924 14:46:31 -- common/autotest_common.sh@955 -- # kill 927031 00:09:48.924 14:46:31 -- common/autotest_common.sh@960 -- # wait 927031 00:09:48.924 14:46:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:48.924 14:46:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:48.924 14:46:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:48.924 14:46:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:48.924 14:46:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:48.924 14:46:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.924 14:46:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.924 14:46:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.466 14:46:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:51.466 00:09:51.466 real 0m37.557s 00:09:51.466 user 1m53.593s 00:09:51.466 sys 0m7.222s 00:09:51.466 14:46:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:51.466 14:46:33 -- common/autotest_common.sh@10 -- # set +x 00:09:51.466 ************************************ 00:09:51.466 END TEST nvmf_rpc 00:09:51.466 ************************************ 00:09:51.466 14:46:33 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:51.466 14:46:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:51.466 14:46:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.466 14:46:33 -- common/autotest_common.sh@10 -- # set +x 00:09:51.466 ************************************ 00:09:51.466 START TEST nvmf_invalid 00:09:51.466 ************************************ 00:09:51.466 14:46:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:51.466 * Looking for test storage... 
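Shutdown follows the killprocess guard seen above: confirm the PID is still alive with kill -0, read its command name back with ps so a stray sudo is never signalled, then kill and wait so the nvmf_tgt reactor exits before the interfaces are flushed. Reduced to a sketch (simplified from the flow in autotest_common.sh):

kill_target() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name == sudo ]] && { echo "refusing to kill sudo ($pid)" >&2; return 1; }
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it if it is our child
}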
00:09:51.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.466 14:46:33 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.466 14:46:33 -- nvmf/common.sh@7 -- # uname -s 00:09:51.466 14:46:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.466 14:46:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.466 14:46:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.466 14:46:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.466 14:46:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.466 14:46:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.466 14:46:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.466 14:46:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.466 14:46:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.466 14:46:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.466 14:46:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:51.466 14:46:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:51.466 14:46:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.466 14:46:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.466 14:46:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.466 14:46:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.466 14:46:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.466 14:46:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.466 14:46:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.466 14:46:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.466 14:46:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.466 14:46:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.466 14:46:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.466 14:46:33 -- paths/export.sh@5 -- # export PATH 00:09:51.466 14:46:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.466 14:46:33 -- nvmf/common.sh@47 -- # : 0 00:09:51.466 14:46:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.466 14:46:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.466 14:46:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.466 14:46:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.466 14:46:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.466 14:46:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.466 14:46:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.466 14:46:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.467 14:46:33 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:51.467 14:46:33 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.467 14:46:33 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:51.467 14:46:33 -- target/invalid.sh@14 -- # target=foobar 00:09:51.467 14:46:33 -- target/invalid.sh@16 -- # RANDOM=0 00:09:51.467 14:46:33 -- target/invalid.sh@34 -- # nvmftestinit 00:09:51.467 14:46:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:51.467 14:46:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.467 14:46:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:51.467 14:46:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:51.467 14:46:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:51.467 14:46:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.467 14:46:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.467 14:46:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.467 14:46:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:51.467 14:46:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:51.467 14:46:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:51.467 14:46:33 -- common/autotest_common.sh@10 -- # set +x 00:09:58.057 14:46:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:58.057 14:46:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:58.057 14:46:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:58.057 14:46:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:58.057 14:46:40 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:58.057 14:46:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:58.057 14:46:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:58.057 14:46:40 -- nvmf/common.sh@295 -- # net_devs=() 00:09:58.057 14:46:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:58.057 14:46:40 -- nvmf/common.sh@296 -- # e810=() 00:09:58.057 14:46:40 -- nvmf/common.sh@296 -- # local -ga e810 00:09:58.057 14:46:40 -- nvmf/common.sh@297 -- # x722=() 00:09:58.057 14:46:40 -- nvmf/common.sh@297 -- # local -ga x722 00:09:58.057 14:46:40 -- nvmf/common.sh@298 -- # mlx=() 00:09:58.057 14:46:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:58.057 14:46:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.057 14:46:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:58.057 14:46:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:58.057 14:46:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:58.057 14:46:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.057 14:46:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:58.057 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:58.057 14:46:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.057 14:46:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:58.057 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:58.057 14:46:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.057 14:46:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.058 14:46:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.058 14:46:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.058 14:46:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:58.058 14:46:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:58.058 14:46:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:58.058 14:46:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.058 
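NIC selection above is a table lookup: candidate PCI functions are matched against known Intel E810 device IDs (0x1592, 0x159b), the x722 ID (0x37d2) and a list of Mellanox parts, and only the E810 ports are kept because SPDK_TEST_NVMF_NICS=e810. The pci_bus_cache plumbing is not visible in this excerpt, but the same check can be made straight from sysfs; a hedged sketch using standard Linux paths (the ID list mirrors the arrays in the trace):

# Print E810 functions (vendor 0x8086, device 0x159b or 0x1592) and their net devices.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")
    device=$(cat "$dev/device")
    if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
        bdf=${dev##*/}
        echo "Found $bdf ($vendor - $device)"
        ls "$dev/net" 2>/dev/null      # e.g. cvl_0_0 once the ice driver is bound
    fi
done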
14:46:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.058 14:46:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:58.058 14:46:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.058 14:46:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:58.058 Found net devices under 0000:31:00.0: cvl_0_0 00:09:58.058 14:46:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.058 14:46:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.058 14:46:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.058 14:46:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:58.058 14:46:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.058 14:46:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:58.058 Found net devices under 0000:31:00.1: cvl_0_1 00:09:58.058 14:46:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.058 14:46:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:58.058 14:46:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:58.058 14:46:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:58.058 14:46:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:58.058 14:46:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:58.058 14:46:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.058 14:46:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.058 14:46:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.058 14:46:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:58.058 14:46:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.058 14:46:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.058 14:46:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:58.058 14:46:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.058 14:46:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.058 14:46:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:58.058 14:46:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:58.058 14:46:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.058 14:46:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.319 14:46:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.319 14:46:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.319 14:46:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:58.319 14:46:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.319 14:46:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.319 14:46:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.319 14:46:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:58.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:09:58.319 00:09:58.319 --- 10.0.0.2 ping statistics --- 00:09:58.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.319 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:09:58.319 14:46:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:09:58.319 00:09:58.319 --- 10.0.0.1 ping statistics --- 00:09:58.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.319 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:09:58.319 14:46:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.319 14:46:40 -- nvmf/common.sh@411 -- # return 0 00:09:58.319 14:46:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:58.319 14:46:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.319 14:46:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:58.319 14:46:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:58.319 14:46:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.319 14:46:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:58.319 14:46:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:58.319 14:46:40 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:58.319 14:46:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:58.319 14:46:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:58.319 14:46:40 -- common/autotest_common.sh@10 -- # set +x 00:09:58.319 14:46:40 -- nvmf/common.sh@470 -- # nvmfpid=936948 00:09:58.319 14:46:40 -- nvmf/common.sh@471 -- # waitforlisten 936948 00:09:58.319 14:46:40 -- common/autotest_common.sh@817 -- # '[' -z 936948 ']' 00:09:58.319 14:46:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.319 14:46:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:58.319 14:46:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.319 14:46:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:58.319 14:46:40 -- common/autotest_common.sh@10 -- # set +x 00:09:58.319 14:46:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.319 [2024-04-26 14:46:40.971564] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:58.319 [2024-04-26 14:46:40.971616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.579 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.579 [2024-04-26 14:46:41.038996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.579 [2024-04-26 14:46:41.104299] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.579 [2024-04-26 14:46:41.104338] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.579 [2024-04-26 14:46:41.104347] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.579 [2024-04-26 14:46:41.104355] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.579 [2024-04-26 14:46:41.104362] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
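nvmf_tcp_init, as traced above, splits the two E810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and a single ping in each direction proves the path works. The same wiring reduced to its essentials (interface names and addresses are the ones from the log):

ns=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                            # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1                     # target -> initiator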
00:09:58.579 [2024-04-26 14:46:41.104544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.579 [2024-04-26 14:46:41.104659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.579 [2024-04-26 14:46:41.104815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.579 [2024-04-26 14:46:41.104816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.150 14:46:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:59.150 14:46:41 -- common/autotest_common.sh@850 -- # return 0 00:09:59.150 14:46:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:59.150 14:46:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:59.150 14:46:41 -- common/autotest_common.sh@10 -- # set +x 00:09:59.150 14:46:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.150 14:46:41 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:59.150 14:46:41 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30491 00:09:59.409 [2024-04-26 14:46:41.900756] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:59.409 14:46:41 -- target/invalid.sh@40 -- # out='request: 00:09:59.409 { 00:09:59.409 "nqn": "nqn.2016-06.io.spdk:cnode30491", 00:09:59.409 "tgt_name": "foobar", 00:09:59.409 "method": "nvmf_create_subsystem", 00:09:59.409 "req_id": 1 00:09:59.409 } 00:09:59.409 Got JSON-RPC error response 00:09:59.409 response: 00:09:59.409 { 00:09:59.409 "code": -32603, 00:09:59.409 "message": "Unable to find target foobar" 00:09:59.409 }' 00:09:59.409 14:46:41 -- target/invalid.sh@41 -- # [[ request: 00:09:59.409 { 00:09:59.409 "nqn": "nqn.2016-06.io.spdk:cnode30491", 00:09:59.409 "tgt_name": "foobar", 00:09:59.409 "method": "nvmf_create_subsystem", 00:09:59.409 "req_id": 1 00:09:59.409 } 00:09:59.409 Got JSON-RPC error response 00:09:59.409 response: 00:09:59.409 { 00:09:59.409 "code": -32603, 00:09:59.409 "message": "Unable to find target foobar" 00:09:59.409 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:59.409 14:46:41 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:59.409 14:46:41 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27032 00:09:59.670 [2024-04-26 14:46:42.077344] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27032: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:59.670 14:46:42 -- target/invalid.sh@45 -- # out='request: 00:09:59.670 { 00:09:59.670 "nqn": "nqn.2016-06.io.spdk:cnode27032", 00:09:59.670 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:59.670 "method": "nvmf_create_subsystem", 00:09:59.670 "req_id": 1 00:09:59.670 } 00:09:59.670 Got JSON-RPC error response 00:09:59.670 response: 00:09:59.670 { 00:09:59.670 "code": -32602, 00:09:59.670 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:59.670 }' 00:09:59.670 14:46:42 -- target/invalid.sh@46 -- # [[ request: 00:09:59.670 { 00:09:59.670 "nqn": "nqn.2016-06.io.spdk:cnode27032", 00:09:59.670 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:59.670 "method": "nvmf_create_subsystem", 00:09:59.670 "req_id": 1 00:09:59.670 } 00:09:59.670 Got JSON-RPC error response 00:09:59.670 response: 00:09:59.670 { 
00:09:59.670 "code": -32602, 00:09:59.670 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:59.670 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:59.670 14:46:42 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:59.670 14:46:42 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28654 00:09:59.670 [2024-04-26 14:46:42.253944] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28654: invalid model number 'SPDK_Controller' 00:09:59.670 14:46:42 -- target/invalid.sh@50 -- # out='request: 00:09:59.670 { 00:09:59.670 "nqn": "nqn.2016-06.io.spdk:cnode28654", 00:09:59.670 "model_number": "SPDK_Controller\u001f", 00:09:59.670 "method": "nvmf_create_subsystem", 00:09:59.670 "req_id": 1 00:09:59.670 } 00:09:59.670 Got JSON-RPC error response 00:09:59.670 response: 00:09:59.670 { 00:09:59.670 "code": -32602, 00:09:59.670 "message": "Invalid MN SPDK_Controller\u001f" 00:09:59.670 }' 00:09:59.670 14:46:42 -- target/invalid.sh@51 -- # [[ request: 00:09:59.670 { 00:09:59.670 "nqn": "nqn.2016-06.io.spdk:cnode28654", 00:09:59.670 "model_number": "SPDK_Controller\u001f", 00:09:59.670 "method": "nvmf_create_subsystem", 00:09:59.670 "req_id": 1 00:09:59.670 } 00:09:59.670 Got JSON-RPC error response 00:09:59.670 response: 00:09:59.670 { 00:09:59.670 "code": -32602, 00:09:59.670 "message": "Invalid MN SPDK_Controller\u001f" 00:09:59.670 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:59.670 14:46:42 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:59.670 14:46:42 -- target/invalid.sh@19 -- # local length=21 ll 00:09:59.670 14:46:42 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:59.670 14:46:42 -- target/invalid.sh@21 -- # local chars 00:09:59.670 14:46:42 -- target/invalid.sh@22 -- # local string 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # printf %x 56 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # string+=8 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # printf %x 124 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # string+='|' 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # printf %x 85 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # string+=U 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # printf %x 119 00:09:59.670 14:46:42 -- 
target/invalid.sh@25 -- # echo -e '\x77' 00:09:59.670 14:46:42 -- target/invalid.sh@25 -- # string+=w 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.670 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.671 14:46:42 -- target/invalid.sh@25 -- # printf %x 106 00:09:59.671 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:59.671 14:46:42 -- target/invalid.sh@25 -- # string+=j 00:09:59.671 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.671 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.671 14:46:42 -- target/invalid.sh@25 -- # printf %x 126 00:09:59.671 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:59.671 14:46:42 -- target/invalid.sh@25 -- # string+='~' 00:09:59.671 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.671 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 91 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+='[' 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 97 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=a 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 54 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=6 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 98 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=b 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 59 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=';' 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 52 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=4 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 112 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=p 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 97 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=a 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 48 00:09:59.932 14:46:42 -- 
target/invalid.sh@25 -- # echo -e '\x30' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=0 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 50 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=2 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 114 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=r 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 123 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+='{' 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 90 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+=Z 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 40 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+='(' 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # printf %x 35 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:59.932 14:46:42 -- target/invalid.sh@25 -- # string+='#' 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.932 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.932 14:46:42 -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:09:59.932 14:46:42 -- target/invalid.sh@31 -- # echo '8|Uwj~[a6b;4pa02r{Z(#' 00:09:59.932 14:46:42 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '8|Uwj~[a6b;4pa02r{Z(#' nqn.2016-06.io.spdk:cnode12394 00:09:59.932 [2024-04-26 14:46:42.582970] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12394: invalid serial number '8|Uwj~[a6b;4pa02r{Z(#' 00:10:00.193 14:46:42 -- target/invalid.sh@54 -- # out='request: 00:10:00.193 { 00:10:00.193 "nqn": "nqn.2016-06.io.spdk:cnode12394", 00:10:00.193 "serial_number": "8|Uwj~[a6b;4pa02r{Z(#", 00:10:00.193 "method": "nvmf_create_subsystem", 00:10:00.193 "req_id": 1 00:10:00.193 } 00:10:00.193 Got JSON-RPC error response 00:10:00.193 response: 00:10:00.193 { 00:10:00.193 "code": -32602, 00:10:00.193 "message": "Invalid SN 8|Uwj~[a6b;4pa02r{Z(#" 00:10:00.193 }' 00:10:00.193 14:46:42 -- target/invalid.sh@55 -- # [[ request: 00:10:00.193 { 00:10:00.193 "nqn": "nqn.2016-06.io.spdk:cnode12394", 00:10:00.193 "serial_number": "8|Uwj~[a6b;4pa02r{Z(#", 00:10:00.193 "method": "nvmf_create_subsystem", 00:10:00.193 "req_id": 1 00:10:00.193 } 00:10:00.193 Got JSON-RPC error response 00:10:00.193 response: 00:10:00.193 { 00:10:00.193 "code": -32602, 00:10:00.193 
"message": "Invalid SN 8|Uwj~[a6b;4pa02r{Z(#" 00:10:00.193 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:00.193 14:46:42 -- target/invalid.sh@58 -- # gen_random_s 41 00:10:00.193 14:46:42 -- target/invalid.sh@19 -- # local length=41 ll 00:10:00.193 14:46:42 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:00.193 14:46:42 -- target/invalid.sh@21 -- # local chars 00:10:00.193 14:46:42 -- target/invalid.sh@22 -- # local string 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # printf %x 37 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # string+=% 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # printf %x 51 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # string+=3 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # printf %x 121 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # string+=y 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # printf %x 80 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # string+=P 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # printf %x 124 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # string+='|' 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # printf %x 60 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # string+='<' 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # printf %x 47 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:00.193 14:46:42 -- target/invalid.sh@25 -- # string+=/ 00:10:00.193 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 57 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=9 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- 
# (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 94 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+='^' 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 117 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=u 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 44 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=, 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 76 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=L 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 64 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=@ 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 90 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=Z 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 106 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=j 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 108 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=l 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 62 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+='>' 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 127 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=$'\177' 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 83 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=S 00:10:00.194 14:46:42 -- target/invalid.sh@24 
-- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 105 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=i 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 94 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+='^' 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 81 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=Q 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 109 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=m 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 104 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=h 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 91 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+='[' 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 116 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=t 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 108 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=l 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 88 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=X 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 101 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=e 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 104 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=h 00:10:00.194 14:46:42 -- target/invalid.sh@24 
-- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 60 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+='<' 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 115 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=s 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # printf %x 76 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:00.194 14:46:42 -- target/invalid.sh@25 -- # string+=L 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.194 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # printf %x 39 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # string+=\' 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # printf %x 80 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # string+=P 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # printf %x 54 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x36' 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # string+=6 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # printf %x 126 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # string+='~' 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # printf %x 65 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # string+=A 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # printf %x 114 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # string+=r 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # printf %x 103 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # string+=g 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # printf %x 62 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:00.455 14:46:42 -- target/invalid.sh@25 -- # string+='>' 00:10:00.455 14:46:42 -- target/invalid.sh@24 
-- # (( ll++ )) 00:10:00.455 14:46:42 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:00.455 14:46:42 -- target/invalid.sh@28 -- # [[ % == \- ]] 00:10:00.455 14:46:42 -- target/invalid.sh@31 -- # echo '%3yP|Si^Qmh[tlXeh' 00:10:00.455 14:46:42 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '%3yP|Si^Qmh[tlXeh' nqn.2016-06.io.spdk:cnode14817 00:10:00.455 [2024-04-26 14:46:43.056517] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14817: invalid model number '%3yP|Si^Qmh[tlXeh' 00:10:00.455 14:46:43 -- target/invalid.sh@58 -- # out='request: 00:10:00.455 { 00:10:00.455 "nqn": "nqn.2016-06.io.spdk:cnode14817", 00:10:00.456 "model_number": "%3yP|\u007fSi^Qmh[tlXeh", 00:10:00.456 "method": "nvmf_create_subsystem", 00:10:00.456 "req_id": 1 00:10:00.456 } 00:10:00.456 Got JSON-RPC error response 00:10:00.456 response: 00:10:00.456 { 00:10:00.456 "code": -32602, 00:10:00.456 "message": "Invalid MN %3yP|\u007fSi^Qmh[tlXeh" 00:10:00.456 }' 00:10:00.456 14:46:43 -- target/invalid.sh@59 -- # [[ request: 00:10:00.456 { 00:10:00.456 "nqn": "nqn.2016-06.io.spdk:cnode14817", 00:10:00.456 "model_number": "%3yP|\u007fSi^Qmh[tlXeh", 00:10:00.456 "method": "nvmf_create_subsystem", 00:10:00.456 "req_id": 1 00:10:00.456 } 00:10:00.456 Got JSON-RPC error response 00:10:00.456 response: 00:10:00.456 { 00:10:00.456 "code": -32602, 00:10:00.456 "message": "Invalid MN %3yP|\u007fSi^Qmh[tlXeh" 00:10:00.456 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:00.456 14:46:43 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:00.716 [2024-04-26 14:46:43.225163] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.716 14:46:43 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:00.977 14:46:43 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:00.977 14:46:43 -- target/invalid.sh@67 -- # echo '' 00:10:00.977 14:46:43 -- target/invalid.sh@67 -- # head -n 1 00:10:00.977 14:46:43 -- target/invalid.sh@67 -- # IP= 00:10:00.977 14:46:43 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:00.977 [2024-04-26 14:46:43.578299] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:00.977 14:46:43 -- target/invalid.sh@69 -- # out='request: 00:10:00.977 { 00:10:00.977 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:00.977 "listen_address": { 00:10:00.977 "trtype": "tcp", 00:10:00.977 "traddr": "", 00:10:00.977 "trsvcid": "4421" 00:10:00.977 }, 00:10:00.977 "method": "nvmf_subsystem_remove_listener", 00:10:00.977 "req_id": 1 00:10:00.977 } 00:10:00.977 Got JSON-RPC error response 00:10:00.977 response: 00:10:00.977 { 00:10:00.977 "code": -32602, 00:10:00.977 "message": "Invalid parameters" 00:10:00.977 }' 00:10:00.977 14:46:43 -- target/invalid.sh@70 -- # [[ request: 00:10:00.977 { 00:10:00.977 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:00.977 "listen_address": { 00:10:00.977 "trtype": "tcp", 00:10:00.977 "traddr": "", 00:10:00.977 "trsvcid": "4421" 00:10:00.977 }, 00:10:00.977 "method": "nvmf_subsystem_remove_listener", 00:10:00.977 "req_id": 1 00:10:00.977 } 00:10:00.977 Got JSON-RPC error response 00:10:00.977 response: 00:10:00.977 { 00:10:00.977 "code": -32602, 
00:10:00.977 "message": "Invalid parameters" 00:10:00.977 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:00.977 14:46:43 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16620 -i 0 00:10:01.237 [2024-04-26 14:46:43.750792] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16620: invalid cntlid range [0-65519] 00:10:01.237 14:46:43 -- target/invalid.sh@73 -- # out='request: 00:10:01.237 { 00:10:01.237 "nqn": "nqn.2016-06.io.spdk:cnode16620", 00:10:01.237 "min_cntlid": 0, 00:10:01.237 "method": "nvmf_create_subsystem", 00:10:01.237 "req_id": 1 00:10:01.237 } 00:10:01.237 Got JSON-RPC error response 00:10:01.237 response: 00:10:01.237 { 00:10:01.237 "code": -32602, 00:10:01.237 "message": "Invalid cntlid range [0-65519]" 00:10:01.237 }' 00:10:01.237 14:46:43 -- target/invalid.sh@74 -- # [[ request: 00:10:01.237 { 00:10:01.237 "nqn": "nqn.2016-06.io.spdk:cnode16620", 00:10:01.237 "min_cntlid": 0, 00:10:01.237 "method": "nvmf_create_subsystem", 00:10:01.237 "req_id": 1 00:10:01.237 } 00:10:01.237 Got JSON-RPC error response 00:10:01.237 response: 00:10:01.237 { 00:10:01.237 "code": -32602, 00:10:01.237 "message": "Invalid cntlid range [0-65519]" 00:10:01.237 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.237 14:46:43 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20117 -i 65520 00:10:01.497 [2024-04-26 14:46:43.923385] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20117: invalid cntlid range [65520-65519] 00:10:01.497 14:46:43 -- target/invalid.sh@75 -- # out='request: 00:10:01.498 { 00:10:01.498 "nqn": "nqn.2016-06.io.spdk:cnode20117", 00:10:01.498 "min_cntlid": 65520, 00:10:01.498 "method": "nvmf_create_subsystem", 00:10:01.498 "req_id": 1 00:10:01.498 } 00:10:01.498 Got JSON-RPC error response 00:10:01.498 response: 00:10:01.498 { 00:10:01.498 "code": -32602, 00:10:01.498 "message": "Invalid cntlid range [65520-65519]" 00:10:01.498 }' 00:10:01.498 14:46:43 -- target/invalid.sh@76 -- # [[ request: 00:10:01.498 { 00:10:01.498 "nqn": "nqn.2016-06.io.spdk:cnode20117", 00:10:01.498 "min_cntlid": 65520, 00:10:01.498 "method": "nvmf_create_subsystem", 00:10:01.498 "req_id": 1 00:10:01.498 } 00:10:01.498 Got JSON-RPC error response 00:10:01.498 response: 00:10:01.498 { 00:10:01.498 "code": -32602, 00:10:01.498 "message": "Invalid cntlid range [65520-65519]" 00:10:01.498 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.498 14:46:43 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19613 -I 0 00:10:01.498 [2024-04-26 14:46:44.095956] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19613: invalid cntlid range [1-0] 00:10:01.498 14:46:44 -- target/invalid.sh@77 -- # out='request: 00:10:01.498 { 00:10:01.498 "nqn": "nqn.2016-06.io.spdk:cnode19613", 00:10:01.498 "max_cntlid": 0, 00:10:01.498 "method": "nvmf_create_subsystem", 00:10:01.498 "req_id": 1 00:10:01.498 } 00:10:01.498 Got JSON-RPC error response 00:10:01.498 response: 00:10:01.498 { 00:10:01.498 "code": -32602, 00:10:01.498 "message": "Invalid cntlid range [1-0]" 00:10:01.498 }' 00:10:01.498 14:46:44 -- target/invalid.sh@78 -- # [[ request: 00:10:01.498 { 00:10:01.498 "nqn": "nqn.2016-06.io.spdk:cnode19613", 
00:10:01.498 "max_cntlid": 0, 00:10:01.498 "method": "nvmf_create_subsystem", 00:10:01.498 "req_id": 1 00:10:01.498 } 00:10:01.498 Got JSON-RPC error response 00:10:01.498 response: 00:10:01.498 { 00:10:01.498 "code": -32602, 00:10:01.498 "message": "Invalid cntlid range [1-0]" 00:10:01.498 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.498 14:46:44 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32267 -I 65520 00:10:01.758 [2024-04-26 14:46:44.260506] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32267: invalid cntlid range [1-65520] 00:10:01.758 14:46:44 -- target/invalid.sh@79 -- # out='request: 00:10:01.758 { 00:10:01.758 "nqn": "nqn.2016-06.io.spdk:cnode32267", 00:10:01.758 "max_cntlid": 65520, 00:10:01.758 "method": "nvmf_create_subsystem", 00:10:01.758 "req_id": 1 00:10:01.758 } 00:10:01.758 Got JSON-RPC error response 00:10:01.758 response: 00:10:01.758 { 00:10:01.758 "code": -32602, 00:10:01.758 "message": "Invalid cntlid range [1-65520]" 00:10:01.758 }' 00:10:01.758 14:46:44 -- target/invalid.sh@80 -- # [[ request: 00:10:01.758 { 00:10:01.758 "nqn": "nqn.2016-06.io.spdk:cnode32267", 00:10:01.758 "max_cntlid": 65520, 00:10:01.758 "method": "nvmf_create_subsystem", 00:10:01.758 "req_id": 1 00:10:01.758 } 00:10:01.758 Got JSON-RPC error response 00:10:01.758 response: 00:10:01.758 { 00:10:01.758 "code": -32602, 00:10:01.758 "message": "Invalid cntlid range [1-65520]" 00:10:01.758 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.758 14:46:44 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25420 -i 6 -I 5 00:10:02.017 [2024-04-26 14:46:44.424999] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25420: invalid cntlid range [6-5] 00:10:02.017 14:46:44 -- target/invalid.sh@83 -- # out='request: 00:10:02.017 { 00:10:02.017 "nqn": "nqn.2016-06.io.spdk:cnode25420", 00:10:02.017 "min_cntlid": 6, 00:10:02.017 "max_cntlid": 5, 00:10:02.017 "method": "nvmf_create_subsystem", 00:10:02.017 "req_id": 1 00:10:02.017 } 00:10:02.017 Got JSON-RPC error response 00:10:02.017 response: 00:10:02.017 { 00:10:02.017 "code": -32602, 00:10:02.017 "message": "Invalid cntlid range [6-5]" 00:10:02.017 }' 00:10:02.017 14:46:44 -- target/invalid.sh@84 -- # [[ request: 00:10:02.017 { 00:10:02.017 "nqn": "nqn.2016-06.io.spdk:cnode25420", 00:10:02.017 "min_cntlid": 6, 00:10:02.017 "max_cntlid": 5, 00:10:02.017 "method": "nvmf_create_subsystem", 00:10:02.017 "req_id": 1 00:10:02.017 } 00:10:02.017 Got JSON-RPC error response 00:10:02.017 response: 00:10:02.017 { 00:10:02.017 "code": -32602, 00:10:02.017 "message": "Invalid cntlid range [6-5]" 00:10:02.017 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:02.017 14:46:44 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:02.017 14:46:44 -- target/invalid.sh@87 -- # out='request: 00:10:02.017 { 00:10:02.017 "name": "foobar", 00:10:02.017 "method": "nvmf_delete_target", 00:10:02.017 "req_id": 1 00:10:02.017 } 00:10:02.017 Got JSON-RPC error response 00:10:02.017 response: 00:10:02.017 { 00:10:02.017 "code": -32602, 00:10:02.017 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:10:02.017 }' 00:10:02.017 14:46:44 -- target/invalid.sh@88 -- # [[ request: 00:10:02.017 { 00:10:02.017 "name": "foobar", 00:10:02.017 "method": "nvmf_delete_target", 00:10:02.017 "req_id": 1 00:10:02.017 } 00:10:02.017 Got JSON-RPC error response 00:10:02.017 response: 00:10:02.017 { 00:10:02.017 "code": -32602, 00:10:02.017 "message": "The specified target doesn't exist, cannot delete it." 00:10:02.017 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:02.017 14:46:44 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:02.017 14:46:44 -- target/invalid.sh@91 -- # nvmftestfini 00:10:02.017 14:46:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:02.017 14:46:44 -- nvmf/common.sh@117 -- # sync 00:10:02.017 14:46:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:02.017 14:46:44 -- nvmf/common.sh@120 -- # set +e 00:10:02.017 14:46:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.017 14:46:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:02.017 rmmod nvme_tcp 00:10:02.017 rmmod nvme_fabrics 00:10:02.017 rmmod nvme_keyring 00:10:02.017 14:46:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.018 14:46:44 -- nvmf/common.sh@124 -- # set -e 00:10:02.018 14:46:44 -- nvmf/common.sh@125 -- # return 0 00:10:02.018 14:46:44 -- nvmf/common.sh@478 -- # '[' -n 936948 ']' 00:10:02.018 14:46:44 -- nvmf/common.sh@479 -- # killprocess 936948 00:10:02.018 14:46:44 -- common/autotest_common.sh@936 -- # '[' -z 936948 ']' 00:10:02.018 14:46:44 -- common/autotest_common.sh@940 -- # kill -0 936948 00:10:02.018 14:46:44 -- common/autotest_common.sh@941 -- # uname 00:10:02.018 14:46:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:02.018 14:46:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 936948 00:10:02.278 14:46:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:02.278 14:46:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:02.278 14:46:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 936948' 00:10:02.278 killing process with pid 936948 00:10:02.278 14:46:44 -- common/autotest_common.sh@955 -- # kill 936948 00:10:02.278 14:46:44 -- common/autotest_common.sh@960 -- # wait 936948 00:10:02.278 14:46:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:02.278 14:46:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:02.278 14:46:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:02.278 14:46:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:02.278 14:46:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:02.278 14:46:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.278 14:46:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:02.278 14:46:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.232 14:46:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:04.232 00:10:04.232 real 0m13.146s 00:10:04.232 user 0m18.903s 00:10:04.232 sys 0m6.124s 00:10:04.232 14:46:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:04.232 14:46:46 -- common/autotest_common.sh@10 -- # set +x 00:10:04.232 ************************************ 00:10:04.493 END TEST nvmf_invalid 00:10:04.493 ************************************ 00:10:04.493 14:46:46 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:04.493 
14:46:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:04.493 14:46:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.493 14:46:46 -- common/autotest_common.sh@10 -- # set +x 00:10:04.493 ************************************ 00:10:04.493 START TEST nvmf_abort 00:10:04.493 ************************************ 00:10:04.493 14:46:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:04.753 * Looking for test storage... 00:10:04.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.753 14:46:47 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.753 14:46:47 -- nvmf/common.sh@7 -- # uname -s 00:10:04.753 14:46:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.753 14:46:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.753 14:46:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.753 14:46:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.753 14:46:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.753 14:46:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.753 14:46:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.753 14:46:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.753 14:46:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.753 14:46:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.753 14:46:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:04.753 14:46:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:04.753 14:46:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.753 14:46:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.754 14:46:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.754 14:46:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.754 14:46:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.754 14:46:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.754 14:46:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.754 14:46:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.754 14:46:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.754 14:46:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.754 14:46:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.754 14:46:47 -- paths/export.sh@5 -- # export PATH 00:10:04.754 14:46:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.754 14:46:47 -- nvmf/common.sh@47 -- # : 0 00:10:04.754 14:46:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.754 14:46:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.754 14:46:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.754 14:46:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.754 14:46:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.754 14:46:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.754 14:46:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.754 14:46:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.754 14:46:47 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.754 14:46:47 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:04.754 14:46:47 -- target/abort.sh@14 -- # nvmftestinit 00:10:04.754 14:46:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:04.754 14:46:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.754 14:46:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:04.754 14:46:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:04.754 14:46:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:04.754 14:46:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.754 14:46:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.754 14:46:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.754 14:46:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:04.754 14:46:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:04.754 14:46:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.754 14:46:47 -- common/autotest_common.sh@10 -- # set +x 00:10:11.341 14:46:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:10:11.341 14:46:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.341 14:46:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.341 14:46:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.341 14:46:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.341 14:46:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.341 14:46:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.341 14:46:53 -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.341 14:46:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.341 14:46:53 -- nvmf/common.sh@296 -- # e810=() 00:10:11.341 14:46:53 -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.341 14:46:53 -- nvmf/common.sh@297 -- # x722=() 00:10:11.341 14:46:53 -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.341 14:46:53 -- nvmf/common.sh@298 -- # mlx=() 00:10:11.341 14:46:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.341 14:46:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.341 14:46:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.341 14:46:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.341 14:46:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.341 14:46:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.341 14:46:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.342 14:46:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.342 14:46:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.342 14:46:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.342 14:46:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.342 14:46:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.342 14:46:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.342 14:46:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:11.342 14:46:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.342 14:46:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.342 14:46:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:11.342 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:11.342 14:46:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.342 14:46:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:11.342 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:11.342 14:46:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
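The nvmf/common.sh trace above selects test NICs purely by PCI vendor and device ID: 0x8086 with 0x1592 or 0x159b is treated as an Intel E810, 0x8086/0x37d2 as an X722, and the 0x15b3 entries as Mellanox parts. A minimal sketch of that classification is shown below; it is reconstructed from the traced array assignments rather than copied from the script, and the lspci scan is a hypothetical stand-in for the script's own pci_bus_cache lookup, which is not reproduced here.

# Condensed sketch of the NIC classification traced above
# (nvmf/common.sh, gather_supported_nvmf_pci_devs). Only the
# device-ID tables come from the trace; the lspci scan is a stand-in.
classify_nvmf_nics() {
    local intel=8086 mellanox=15b3
    local -a e810=() x722=() mlx=()
    local slot vendor device
    while read -r slot vendor device; do
        case "$vendor:$device" in
            "$intel:1592"|"$intel:159b") e810+=("$slot") ;;   # E810 CQDA2 / -XXV
            "$intel:37d2")               x722+=("$slot") ;;   # X722
            "$mellanox:"*)               mlx+=("$slot")  ;;   # simplified: any Mellanox ID
        esac
    done < <(lspci -Dnmm | awk '{ gsub(/"/,""); print $1, $3, $4 }')
    printf 'e810: %s\nx722: %s\nmlx: %s\n' "${e810[*]}" "${x722[*]}" "${mlx[*]}"
}

Both 0000:31:00.0 and 0000:31:00.1 in this run report 0x8086/0x159b, so they land in the e810 bucket and their cvl_0_0 and cvl_0_1 netdevs become the target and initiator interfaces used for the rest of the test.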
00:10:11.342 14:46:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.342 14:46:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.342 14:46:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:11.342 14:46:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.342 14:46:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:11.342 Found net devices under 0000:31:00.0: cvl_0_0 00:10:11.342 14:46:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.342 14:46:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.342 14:46:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.342 14:46:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:11.342 14:46:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.342 14:46:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:11.342 Found net devices under 0000:31:00.1: cvl_0_1 00:10:11.342 14:46:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.342 14:46:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:11.342 14:46:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:11.342 14:46:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:11.342 14:46:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:11.342 14:46:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.342 14:46:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.342 14:46:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.342 14:46:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:11.342 14:46:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.342 14:46:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.342 14:46:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:11.342 14:46:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.342 14:46:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.342 14:46:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:11.342 14:46:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:11.342 14:46:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.342 14:46:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.621 14:46:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.621 14:46:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.621 14:46:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:11.621 14:46:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.621 14:46:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.621 14:46:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.621 14:46:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:11.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:10:11.621 00:10:11.621 --- 10.0.0.2 ping statistics --- 00:10:11.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.621 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:10:11.621 14:46:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:10:11.621 00:10:11.621 --- 10.0.0.1 ping statistics --- 00:10:11.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.621 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:10:11.621 14:46:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.621 14:46:54 -- nvmf/common.sh@411 -- # return 0 00:10:11.621 14:46:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:11.621 14:46:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.621 14:46:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:11.621 14:46:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:11.621 14:46:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.621 14:46:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:11.621 14:46:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:11.621 14:46:54 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:11.621 14:46:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:11.621 14:46:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:11.621 14:46:54 -- common/autotest_common.sh@10 -- # set +x 00:10:11.621 14:46:54 -- nvmf/common.sh@470 -- # nvmfpid=942194 00:10:11.621 14:46:54 -- nvmf/common.sh@471 -- # waitforlisten 942194 00:10:11.621 14:46:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:11.621 14:46:54 -- common/autotest_common.sh@817 -- # '[' -z 942194 ']' 00:10:11.621 14:46:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.621 14:46:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:11.621 14:46:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.621 14:46:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:11.621 14:46:54 -- common/autotest_common.sh@10 -- # set +x 00:10:11.887 [2024-04-26 14:46:54.318939] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:10:11.887 [2024-04-26 14:46:54.318989] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.887 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.887 [2024-04-26 14:46:54.402529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.887 [2024-04-26 14:46:54.475011] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.887 [2024-04-26 14:46:54.475067] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.887 [2024-04-26 14:46:54.475082] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.887 [2024-04-26 14:46:54.475090] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.887 [2024-04-26 14:46:54.475098] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.887 [2024-04-26 14:46:54.475242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.887 [2024-04-26 14:46:54.475398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.887 [2024-04-26 14:46:54.475399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.457 14:46:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:12.457 14:46:55 -- common/autotest_common.sh@850 -- # return 0 00:10:12.457 14:46:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:12.457 14:46:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:12.457 14:46:55 -- common/autotest_common.sh@10 -- # set +x 00:10:12.718 14:46:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.718 14:46:55 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:12.718 14:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.718 14:46:55 -- common/autotest_common.sh@10 -- # set +x 00:10:12.718 [2024-04-26 14:46:55.135637] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.718 14:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.718 14:46:55 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:12.718 14:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.718 14:46:55 -- common/autotest_common.sh@10 -- # set +x 00:10:12.718 Malloc0 00:10:12.718 14:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.718 14:46:55 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:12.718 14:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.718 14:46:55 -- common/autotest_common.sh@10 -- # set +x 00:10:12.718 Delay0 00:10:12.718 14:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.718 14:46:55 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:12.718 14:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.718 14:46:55 -- common/autotest_common.sh@10 -- # set +x 00:10:12.718 14:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.718 14:46:55 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:12.718 14:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.718 14:46:55 -- common/autotest_common.sh@10 -- # set +x 00:10:12.718 14:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.718 14:46:55 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:12.718 14:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.718 14:46:55 -- common/autotest_common.sh@10 -- # set +x 00:10:12.718 [2024-04-26 14:46:55.212339] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.718 14:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.718 14:46:55 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:12.718 14:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.718 14:46:55 -- common/autotest_common.sh@10 -- # set +x 00:10:12.718 14:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.718 14:46:55 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:12.718 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.718 [2024-04-26 14:46:55.290578] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:15.261 Initializing NVMe Controllers 00:10:15.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:15.261 controller IO queue size 128 less than required 00:10:15.261 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:15.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:15.261 Initialization complete. Launching workers. 00:10:15.261 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35002 00:10:15.261 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35063, failed to submit 62 00:10:15.261 success 35006, unsuccess 57, failed 0 00:10:15.261 14:46:57 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:15.261 14:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:15.261 14:46:57 -- common/autotest_common.sh@10 -- # set +x 00:10:15.261 14:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:15.261 14:46:57 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:15.261 14:46:57 -- target/abort.sh@38 -- # nvmftestfini 00:10:15.261 14:46:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:15.261 14:46:57 -- nvmf/common.sh@117 -- # sync 00:10:15.261 14:46:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:15.261 14:46:57 -- nvmf/common.sh@120 -- # set +e 00:10:15.261 14:46:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:15.261 14:46:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:15.261 rmmod nvme_tcp 00:10:15.261 rmmod nvme_fabrics 00:10:15.261 rmmod nvme_keyring 00:10:15.261 14:46:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:15.261 14:46:57 -- nvmf/common.sh@124 -- # set -e 00:10:15.261 14:46:57 -- nvmf/common.sh@125 -- # return 0 00:10:15.261 14:46:57 -- nvmf/common.sh@478 -- # '[' -n 942194 ']' 00:10:15.261 14:46:57 -- nvmf/common.sh@479 -- # killprocess 942194 00:10:15.261 14:46:57 -- common/autotest_common.sh@936 -- # '[' -z 942194 ']' 00:10:15.261 14:46:57 -- common/autotest_common.sh@940 -- # kill -0 942194 00:10:15.261 14:46:57 -- common/autotest_common.sh@941 -- # uname 00:10:15.261 14:46:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:15.261 14:46:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 942194 00:10:15.262 14:46:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:15.262 14:46:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:15.262 14:46:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 942194' 00:10:15.262 killing process with pid 942194 00:10:15.262 14:46:57 -- common/autotest_common.sh@955 -- # kill 942194 00:10:15.262 14:46:57 -- 
common/autotest_common.sh@960 -- # wait 942194 00:10:15.262 14:46:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:15.262 14:46:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:15.262 14:46:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:15.262 14:46:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:15.262 14:46:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:15.262 14:46:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.262 14:46:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:15.262 14:46:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.174 14:46:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:17.174 00:10:17.174 real 0m12.615s 00:10:17.174 user 0m13.290s 00:10:17.174 sys 0m5.983s 00:10:17.174 14:46:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:17.174 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:10:17.174 ************************************ 00:10:17.174 END TEST nvmf_abort 00:10:17.174 ************************************ 00:10:17.174 14:46:59 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:17.174 14:46:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:17.174 14:46:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.174 14:46:59 -- common/autotest_common.sh@10 -- # set +x 00:10:17.434 ************************************ 00:10:17.434 START TEST nvmf_ns_hotplug_stress 00:10:17.434 ************************************ 00:10:17.434 14:46:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:17.434 * Looking for test storage... 
00:10:17.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.434 14:47:00 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.434 14:47:00 -- nvmf/common.sh@7 -- # uname -s 00:10:17.434 14:47:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.434 14:47:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.434 14:47:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.435 14:47:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.435 14:47:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.435 14:47:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.435 14:47:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.435 14:47:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.435 14:47:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.435 14:47:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.435 14:47:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:17.435 14:47:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:17.435 14:47:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.435 14:47:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.435 14:47:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.435 14:47:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.435 14:47:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.435 14:47:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.435 14:47:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.435 14:47:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.435 14:47:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.435 14:47:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.435 14:47:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.435 14:47:00 -- paths/export.sh@5 -- # export PATH 00:10:17.435 14:47:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.435 14:47:00 -- nvmf/common.sh@47 -- # : 0 00:10:17.435 14:47:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.435 14:47:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.435 14:47:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.435 14:47:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.435 14:47:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.435 14:47:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:17.435 14:47:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.435 14:47:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.435 14:47:00 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.435 14:47:00 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:10:17.435 14:47:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:17.435 14:47:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.435 14:47:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:17.435 14:47:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:17.435 14:47:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:17.435 14:47:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.435 14:47:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.435 14:47:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.435 14:47:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:17.435 14:47:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:17.435 14:47:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:17.435 14:47:00 -- common/autotest_common.sh@10 -- # set +x 00:10:25.576 14:47:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:25.576 14:47:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:25.576 14:47:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:25.576 14:47:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:25.576 14:47:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:25.576 14:47:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:25.576 14:47:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:25.576 14:47:07 -- nvmf/common.sh@295 -- # net_devs=() 00:10:25.577 14:47:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:25.577 14:47:07 -- nvmf/common.sh@296 
-- # e810=() 00:10:25.577 14:47:07 -- nvmf/common.sh@296 -- # local -ga e810 00:10:25.577 14:47:07 -- nvmf/common.sh@297 -- # x722=() 00:10:25.577 14:47:07 -- nvmf/common.sh@297 -- # local -ga x722 00:10:25.577 14:47:07 -- nvmf/common.sh@298 -- # mlx=() 00:10:25.577 14:47:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:25.577 14:47:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.577 14:47:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:25.577 14:47:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:25.577 14:47:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:25.577 14:47:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.577 14:47:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:25.577 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:25.577 14:47:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.577 14:47:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:25.577 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:25.577 14:47:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:25.577 14:47:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.577 14:47:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.577 14:47:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:25.577 14:47:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.577 14:47:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:25.577 Found 
net devices under 0000:31:00.0: cvl_0_0 00:10:25.577 14:47:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.577 14:47:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.577 14:47:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.577 14:47:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:25.577 14:47:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.577 14:47:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:25.577 Found net devices under 0000:31:00.1: cvl_0_1 00:10:25.577 14:47:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.577 14:47:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:25.577 14:47:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:25.577 14:47:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:25.577 14:47:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.577 14:47:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.577 14:47:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.577 14:47:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:25.577 14:47:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.577 14:47:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.577 14:47:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:25.577 14:47:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.577 14:47:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.577 14:47:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:25.577 14:47:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:25.577 14:47:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.577 14:47:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.577 14:47:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.577 14:47:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.577 14:47:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:25.577 14:47:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.577 14:47:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.577 14:47:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.577 14:47:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:25.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:10:25.577 00:10:25.577 --- 10.0.0.2 ping statistics --- 00:10:25.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.577 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:10:25.577 14:47:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:10:25.577 00:10:25.577 --- 10.0.0.1 ping statistics --- 00:10:25.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.577 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:10:25.577 14:47:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.577 14:47:07 -- nvmf/common.sh@411 -- # return 0 00:10:25.577 14:47:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:25.577 14:47:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.577 14:47:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:25.577 14:47:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.577 14:47:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:25.577 14:47:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:25.577 14:47:07 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:10:25.577 14:47:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:25.577 14:47:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:25.577 14:47:07 -- common/autotest_common.sh@10 -- # set +x 00:10:25.577 14:47:07 -- nvmf/common.sh@470 -- # nvmfpid=946975 00:10:25.577 14:47:07 -- nvmf/common.sh@471 -- # waitforlisten 946975 00:10:25.577 14:47:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:25.577 14:47:07 -- common/autotest_common.sh@817 -- # '[' -z 946975 ']' 00:10:25.577 14:47:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.577 14:47:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:25.577 14:47:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.577 14:47:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:25.577 14:47:07 -- common/autotest_common.sh@10 -- # set +x 00:10:25.577 [2024-04-26 14:47:07.471077] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:10:25.577 [2024-04-26 14:47:07.471138] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.577 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.577 [2024-04-26 14:47:07.562445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:25.577 [2024-04-26 14:47:07.654431] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.577 [2024-04-26 14:47:07.654496] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.577 [2024-04-26 14:47:07.654505] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.577 [2024-04-26 14:47:07.654511] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.577 [2024-04-26 14:47:07.654517] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
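Editor's note: the nvmf_tcp_init sequence traced above reduces to the following shell steps. This is a condensed sketch of what nvmf/common.sh did in this run (interface names cvl_0_0/cvl_0_1 and addresses are taken from the trace; it is not a verbatim copy of the script):

  # target port moves into its own network namespace; initiator port stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side: 10.0.0.1/24 on cvl_0_1 in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  # target side: 10.0.0.2/24 on cvl_0_0 inside the namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic to the default port, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The single-ping round trips logged above (~0.2-0.4 ms) are only a reachability check before the target application is started.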
00:10:25.577 [2024-04-26 14:47:07.654665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.577 [2024-04-26 14:47:07.654830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.577 [2024-04-26 14:47:07.654831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.838 14:47:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:25.838 14:47:08 -- common/autotest_common.sh@850 -- # return 0 00:10:25.838 14:47:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:25.838 14:47:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:25.838 14:47:08 -- common/autotest_common.sh@10 -- # set +x 00:10:25.838 14:47:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.838 14:47:08 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:10:25.838 14:47:08 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:25.838 [2024-04-26 14:47:08.432964] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.838 14:47:08 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:26.099 14:47:08 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.099 [2024-04-26 14:47:08.762428] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.360 14:47:08 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:26.360 14:47:08 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:26.620 Malloc0 00:10:26.620 14:47:09 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:26.620 Delay0 00:10:26.620 14:47:09 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.881 14:47:09 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:27.141 NULL1 00:10:27.141 14:47:09 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:27.141 14:47:09 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=947636 00:10:27.141 14:47:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:27.141 14:47:09 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:27.141 14:47:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.401 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.343 Read completed with error (sct=0, sc=11) 00:10:28.343 
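Editor's note: condensing the target/ns_hotplug_stress.sh trace above, the test fixture is built with roughly the RPC sequence below before the background load starts. The flags are copied from the traced commands in this run; rpc.py stands for spdk/scripts/rpc.py, and backgrounding the perf job with & / $! is an assumption about how PERF_PID (947636 here) is captured, so treat this as a sketch rather than the script itself:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Delay0 wraps Malloc0 with 1s artificial latency; NULL1 is a resizable null bdev
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # background I/O load against the subsystem while namespaces are hot-plugged
  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!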
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.343 14:47:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.605 14:47:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:10:28.605 14:47:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:28.866 true 00:10:28.866 14:47:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:28.866 14:47:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.810 14:47:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.810 14:47:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:10:29.810 14:47:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:30.070 true 00:10:30.070 14:47:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:30.070 14:47:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.070 14:47:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.331 14:47:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:10:30.331 14:47:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:30.331 true 00:10:30.593 14:47:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:30.593 14:47:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.535 14:47:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.796 14:47:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:10:31.796 14:47:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:32.056 true 00:10:32.056 14:47:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 
947636 00:10:32.056 14:47:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.999 14:47:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.999 14:47:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:10:32.999 14:47:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:32.999 true 00:10:33.261 14:47:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:33.261 14:47:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.261 14:47:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.522 14:47:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:10:33.522 14:47:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:33.522 true 00:10:33.522 14:47:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:33.522 14:47:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.784 14:47:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.044 14:47:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:34.044 14:47:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:34.044 true 00:10:34.044 14:47:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:34.044 14:47:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.305 14:47:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.567 14:47:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:34.567 14:47:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:34.567 true 00:10:34.567 14:47:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:34.567 14:47:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.828 14:47:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.089 14:47:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:10:35.089 14:47:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:35.089 true 00:10:35.089 14:47:17 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:35.089 14:47:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.032 14:47:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.293 14:47:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:36.293 14:47:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:36.293 true 00:10:36.293 14:47:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:36.293 14:47:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.553 14:47:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.813 14:47:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:36.813 14:47:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:36.813 true 00:10:36.813 14:47:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:36.813 14:47:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.073 14:47:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.073 14:47:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:37.073 14:47:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:37.333 true 00:10:37.333 14:47:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:37.333 14:47:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.594 14:47:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.594 14:47:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:37.594 14:47:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:37.855 true 00:10:37.855 14:47:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:37.855 14:47:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.116 14:47:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.116 14:47:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:38.116 14:47:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:38.376 true 00:10:38.376 14:47:20 -- target/ns_hotplug_stress.sh@35 -- # kill 
-0 947636 00:10:38.376 14:47:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.665 14:47:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.665 14:47:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:38.665 14:47:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:38.927 true 00:10:38.927 14:47:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:38.927 14:47:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.927 14:47:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.216 14:47:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:39.216 14:47:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:39.216 true 00:10:39.216 14:47:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:39.216 14:47:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.155 14:47:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.414 14:47:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:40.414 14:47:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:40.414 true 00:10:40.414 14:47:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:40.414 14:47:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.674 14:47:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.933 14:47:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:10:40.933 14:47:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:40.933 true 00:10:40.933 14:47:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:40.933 14:47:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.193 14:47:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.193 14:47:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:41.193 14:47:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:41.452 true 00:10:41.452 14:47:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:41.452 14:47:24 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.391 14:47:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.391 14:47:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:10:42.391 14:47:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:42.652 true 00:10:42.653 14:47:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:42.653 14:47:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.914 14:47:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.914 14:47:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:10:42.914 14:47:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:43.175 true 00:10:43.175 14:47:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:43.175 14:47:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.175 14:47:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.436 14:47:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:10:43.436 14:47:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:43.696 true 00:10:43.696 14:47:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:43.696 14:47:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.696 14:47:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.958 14:47:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:10:43.958 14:47:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:43.958 true 00:10:43.958 14:47:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:43.958 14:47:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.218 14:47:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.480 14:47:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:10:44.480 14:47:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:44.480 true 00:10:44.480 14:47:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:44.480 14:47:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.423 14:47:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.685 14:47:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:10:45.685 14:47:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:45.685 true 00:10:45.685 14:47:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:45.685 14:47:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.944 14:47:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.206 14:47:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:10:46.206 14:47:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:46.206 true 00:10:46.206 14:47:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:46.206 14:47:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.466 14:47:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.466 14:47:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:10:46.466 14:47:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:46.726 true 00:10:46.726 14:47:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:46.726 14:47:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.987 14:47:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.987 14:47:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:10:46.987 14:47:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:47.249 true 00:10:47.249 14:47:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:47.249 14:47:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.510 14:47:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.510 14:47:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:10:47.510 14:47:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:47.771 true 00:10:47.771 14:47:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:47.771 14:47:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:47.771 14:47:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.032 14:47:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:10:48.032 14:47:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:48.293 true 00:10:48.293 14:47:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:48.293 14:47:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.293 14:47:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.554 14:47:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:10:48.554 14:47:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:48.815 true 00:10:48.815 14:47:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:48.815 14:47:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.815 14:47:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.076 14:47:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:10:49.076 14:47:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:49.076 true 00:10:49.336 14:47:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:49.336 14:47:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.336 14:47:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.596 14:47:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:10:49.596 14:47:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:49.596 true 00:10:49.596 14:47:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:49.596 14:47:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.537 14:47:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.813 14:47:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:10:50.814 14:47:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:51.073 true 00:10:51.073 14:47:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:51.073 14:47:33 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.073 14:47:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.333 14:47:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:10:51.334 14:47:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:51.594 true 00:10:51.594 14:47:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:51.594 14:47:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.594 14:47:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.855 14:47:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:10:51.855 14:47:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:51.855 true 00:10:52.115 14:47:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:52.115 14:47:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.115 14:47:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.376 14:47:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:10:52.376 14:47:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:52.376 true 00:10:52.376 14:47:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:52.376 14:47:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.637 14:47:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.898 14:47:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:10:52.898 14:47:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:52.898 true 00:10:52.898 14:47:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:52.898 14:47:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.158 14:47:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.419 14:47:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:10:53.419 14:47:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:53.420 true 00:10:53.420 14:47:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:53.420 14:47:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
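Editor's note: the repeating block in this stretch of the log is the hot-plug loop itself. Its shape, inferred from the traced script line numbers (ns_hotplug_stress.sh@35 through @41) and starting from null_size=1000, is approximately:

  while kill -0 "$PERF_PID"; do      # keep going as long as the perf job is alive
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))   # 1001, 1002, ... one step per iteration
      rpc.py bdev_null_resize NULL1 "$null_size"
  done

Each pass detaches namespace 1 while reads are still in flight (presumably the source of the suppressed 'Read completed with error (sct=0, sc=11)' messages), re-attaches it backed by Delay0, and resizes NULL1 upward by one, until the kill -0 check sees that the 30-second perf run has exited.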
00:10:53.680 14:47:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.941 14:47:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:10:53.941 14:47:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:53.941 true 00:10:53.941 14:47:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:53.941 14:47:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.200 14:47:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.200 14:47:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:10:54.200 14:47:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:54.460 true 00:10:54.460 14:47:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:54.460 14:47:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.721 14:47:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.721 14:47:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:10:54.721 14:47:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:54.982 true 00:10:54.982 14:47:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:54.982 14:47:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.242 14:47:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.242 14:47:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:10:55.242 14:47:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:55.501 true 00:10:55.501 14:47:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:55.501 14:47:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.501 14:47:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.761 14:47:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:10:55.761 14:47:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:56.021 true 00:10:56.021 14:47:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:56.021 14:47:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.021 14:47:38 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.282 14:47:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:10:56.282 14:47:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:56.542 true 00:10:56.542 14:47:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:56.542 14:47:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.542 14:47:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.811 14:47:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:10:56.811 14:47:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:56.811 true 00:10:57.071 14:47:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:57.071 14:47:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.071 14:47:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.331 14:47:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:10:57.331 14:47:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:57.331 true 00:10:57.590 14:47:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:57.590 14:47:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.590 Initializing NVMe Controllers 00:10:57.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:57.590 Controller IO queue size 128, less than required. 00:10:57.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:57.590 Controller IO queue size 128, less than required. 00:10:57.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:57.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:57.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:57.590 Initialization complete. Launching workers. 
00:10:57.590 ======================================================== 00:10:57.590 Latency(us) 00:10:57.590 Device Information : IOPS MiB/s Average min max 00:10:57.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 692.80 0.34 62061.06 2112.43 1107404.97 00:10:57.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9754.06 4.76 13123.09 2469.18 498448.38 00:10:57.591 ======================================================== 00:10:57.591 Total : 10446.86 5.10 16368.49 2112.43 1107404.97 00:10:57.591 00:10:57.591 14:47:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.850 14:47:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:10:57.850 14:47:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:57.850 true 00:10:57.850 14:47:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 947636 00:10:57.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (947636) - No such process 00:10:57.850 14:47:40 -- target/ns_hotplug_stress.sh@44 -- # wait 947636 00:10:57.850 14:47:40 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:57.850 14:47:40 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:10:57.850 14:47:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:57.850 14:47:40 -- nvmf/common.sh@117 -- # sync 00:10:57.850 14:47:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:57.850 14:47:40 -- nvmf/common.sh@120 -- # set +e 00:10:57.850 14:47:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:57.850 14:47:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:57.850 rmmod nvme_tcp 00:10:58.111 rmmod nvme_fabrics 00:10:58.111 rmmod nvme_keyring 00:10:58.111 14:47:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.111 14:47:40 -- nvmf/common.sh@124 -- # set -e 00:10:58.111 14:47:40 -- nvmf/common.sh@125 -- # return 0 00:10:58.111 14:47:40 -- nvmf/common.sh@478 -- # '[' -n 946975 ']' 00:10:58.111 14:47:40 -- nvmf/common.sh@479 -- # killprocess 946975 00:10:58.111 14:47:40 -- common/autotest_common.sh@936 -- # '[' -z 946975 ']' 00:10:58.111 14:47:40 -- common/autotest_common.sh@940 -- # kill -0 946975 00:10:58.111 14:47:40 -- common/autotest_common.sh@941 -- # uname 00:10:58.111 14:47:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:58.111 14:47:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 946975 00:10:58.111 14:47:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:58.111 14:47:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:58.111 14:47:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 946975' 00:10:58.111 killing process with pid 946975 00:10:58.111 14:47:40 -- common/autotest_common.sh@955 -- # kill 946975 00:10:58.111 14:47:40 -- common/autotest_common.sh@960 -- # wait 946975 00:10:58.111 14:47:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:58.111 14:47:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:58.111 14:47:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:58.111 14:47:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.111 14:47:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:58.111 14:47:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.111 
14:47:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.111 14:47:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.652 14:47:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:00.652 00:11:00.652 real 0m42.927s 00:11:00.652 user 2m30.837s 00:11:00.652 sys 0m11.468s 00:11:00.652 14:47:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:00.652 14:47:42 -- common/autotest_common.sh@10 -- # set +x 00:11:00.652 ************************************ 00:11:00.652 END TEST nvmf_ns_hotplug_stress 00:11:00.652 ************************************ 00:11:00.652 14:47:42 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:00.652 14:47:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:00.652 14:47:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.652 14:47:42 -- common/autotest_common.sh@10 -- # set +x 00:11:00.652 ************************************ 00:11:00.652 START TEST nvmf_connect_stress 00:11:00.652 ************************************ 00:11:00.652 14:47:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:00.652 * Looking for test storage... 00:11:00.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.652 14:47:43 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.652 14:47:43 -- nvmf/common.sh@7 -- # uname -s 00:11:00.652 14:47:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.652 14:47:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.652 14:47:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.652 14:47:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.652 14:47:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.652 14:47:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.652 14:47:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.652 14:47:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.652 14:47:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.652 14:47:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.652 14:47:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:00.652 14:47:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:00.652 14:47:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.652 14:47:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.652 14:47:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.652 14:47:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.652 14:47:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.652 14:47:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.652 14:47:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.652 14:47:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.653 14:47:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.653 14:47:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.653 14:47:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.653 14:47:43 -- paths/export.sh@5 -- # export PATH 00:11:00.653 14:47:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.653 14:47:43 -- nvmf/common.sh@47 -- # : 0 00:11:00.653 14:47:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.653 14:47:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.653 14:47:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.653 14:47:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.653 14:47:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.653 14:47:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.653 14:47:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.653 14:47:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.653 14:47:43 -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:00.653 14:47:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:00.653 14:47:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.653 14:47:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:00.653 14:47:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:00.653 14:47:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:00.653 14:47:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.653 14:47:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.653 14:47:43 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.653 14:47:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:00.653 14:47:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:00.653 14:47:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:00.653 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:11:08.878 14:47:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:08.878 14:47:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.878 14:47:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.878 14:47:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.878 14:47:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.878 14:47:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.878 14:47:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.878 14:47:50 -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.878 14:47:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.878 14:47:50 -- nvmf/common.sh@296 -- # e810=() 00:11:08.878 14:47:50 -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.878 14:47:50 -- nvmf/common.sh@297 -- # x722=() 00:11:08.878 14:47:50 -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.878 14:47:50 -- nvmf/common.sh@298 -- # mlx=() 00:11:08.878 14:47:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.878 14:47:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.878 14:47:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.878 14:47:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.878 14:47:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.879 14:47:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.879 14:47:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.879 14:47:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.879 14:47:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.879 14:47:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.879 14:47:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.879 14:47:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.879 14:47:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.879 14:47:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.879 14:47:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.879 14:47:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.879 14:47:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:08.879 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:08.879 14:47:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.879 14:47:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:08.879 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:08.879 
14:47:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.879 14:47:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.879 14:47:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.879 14:47:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:08.879 14:47:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.879 14:47:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:08.879 Found net devices under 0000:31:00.0: cvl_0_0 00:11:08.879 14:47:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.879 14:47:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.879 14:47:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.879 14:47:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:08.879 14:47:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.879 14:47:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:08.879 Found net devices under 0000:31:00.1: cvl_0_1 00:11:08.879 14:47:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.879 14:47:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:08.879 14:47:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:08.879 14:47:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:08.879 14:47:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.879 14:47:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.879 14:47:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.879 14:47:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.879 14:47:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.879 14:47:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.879 14:47:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.879 14:47:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.879 14:47:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.879 14:47:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.879 14:47:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.879 14:47:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.879 14:47:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.879 14:47:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.879 14:47:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.879 14:47:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.879 14:47:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.879 14:47:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.879 14:47:50 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.879 14:47:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:11:08.879 00:11:08.879 --- 10.0.0.2 ping statistics --- 00:11:08.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.879 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:11:08.879 14:47:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:11:08.879 00:11:08.879 --- 10.0.0.1 ping statistics --- 00:11:08.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.879 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:11:08.879 14:47:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.879 14:47:50 -- nvmf/common.sh@411 -- # return 0 00:11:08.879 14:47:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:08.879 14:47:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.879 14:47:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:08.879 14:47:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.879 14:47:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:08.879 14:47:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:08.879 14:47:50 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:08.879 14:47:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:08.879 14:47:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:08.879 14:47:50 -- common/autotest_common.sh@10 -- # set +x 00:11:08.879 14:47:50 -- nvmf/common.sh@470 -- # nvmfpid=957890 00:11:08.879 14:47:50 -- nvmf/common.sh@471 -- # waitforlisten 957890 00:11:08.879 14:47:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:08.879 14:47:50 -- common/autotest_common.sh@817 -- # '[' -z 957890 ']' 00:11:08.879 14:47:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.879 14:47:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:08.879 14:47:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.879 14:47:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:08.879 14:47:50 -- common/autotest_common.sh@10 -- # set +x 00:11:08.879 [2024-04-26 14:47:50.487190] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:08.879 [2024-04-26 14:47:50.487252] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.879 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.879 [2024-04-26 14:47:50.578644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:08.879 [2024-04-26 14:47:50.672530] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
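The nvmf_tcp_init sequence just shown wires the two E810 ports back-to-back: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and reachability is proven with ping in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of the same commands, with interface names and addresses taken from the log and everything else assumed:
    # split the NIC pair: target port into its own namespace, initiator port stays in the default one
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in, then verify basic reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1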
00:11:08.880 [2024-04-26 14:47:50.672591] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.880 [2024-04-26 14:47:50.672599] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.880 [2024-04-26 14:47:50.672606] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.880 [2024-04-26 14:47:50.672612] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.880 [2024-04-26 14:47:50.672741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.880 [2024-04-26 14:47:50.672893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.880 [2024-04-26 14:47:50.672894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.880 14:47:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:08.880 14:47:51 -- common/autotest_common.sh@850 -- # return 0 00:11:08.880 14:47:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:08.880 14:47:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:08.880 14:47:51 -- common/autotest_common.sh@10 -- # set +x 00:11:08.880 14:47:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.880 14:47:51 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.880 14:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.880 14:47:51 -- common/autotest_common.sh@10 -- # set +x 00:11:08.880 [2024-04-26 14:47:51.315294] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.880 14:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.880 14:47:51 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:08.880 14:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.880 14:47:51 -- common/autotest_common.sh@10 -- # set +x 00:11:08.880 14:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.880 14:47:51 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.880 14:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.880 14:47:51 -- common/autotest_common.sh@10 -- # set +x 00:11:08.880 [2024-04-26 14:47:51.339705] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.880 14:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.880 14:47:51 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:08.880 14:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.880 14:47:51 -- common/autotest_common.sh@10 -- # set +x 00:11:08.880 NULL1 00:11:08.880 14:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:08.880 14:47:51 -- target/connect_stress.sh@21 -- # PERF_PID=958240 00:11:08.880 14:47:51 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:08.880 14:47:51 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:08.880 14:47:51 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # seq 1 20 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.880 14:47:51 -- target/connect_stress.sh@28 -- # cat 00:11:08.880 14:47:51 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:08.880 14:47:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.880 14:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:08.880 14:47:51 -- common/autotest_common.sh@10 -- # set +x 00:11:08.880 [2024-04-26 14:47:51.472714] nvme_tcp.c:1641:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=3 for tqpair=0x8ecd10 00:11:08.880 [2024-04-26 14:47:51.474917] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: 
*ERROR*: Failed to send Property Get fabrics command 00:11:08.880 [2024-04-26 14:47:51.474949] nvme_ctrlr.c:1186:nvme_ctrlr_shutdown_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Failed to read the CSTS register 00:11:09.140 14:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.140 14:47:51 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:09.140 14:47:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.140 14:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.140 14:47:51 -- common/autotest_common.sh@10 -- # set +x 00:11:09.710 14:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.710 14:47:52 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:09.710 14:47:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.710 14:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.710 14:47:52 -- common/autotest_common.sh@10 -- # set +x 00:11:09.974 14:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.974 14:47:52 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:09.974 14:47:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.974 14:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.974 14:47:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.235 14:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.235 14:47:52 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:10.235 14:47:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.235 14:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.235 14:47:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.496 14:47:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.496 14:47:53 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:10.496 14:47:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.496 14:47:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.496 14:47:53 -- common/autotest_common.sh@10 -- # set +x 00:11:10.755 14:47:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.755 14:47:53 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:10.755 14:47:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.755 14:47:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.755 14:47:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.325 14:47:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.325 14:47:53 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:11.325 14:47:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.325 14:47:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.325 14:47:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.584 14:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.584 14:47:54 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:11.584 14:47:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.584 14:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.584 14:47:54 -- common/autotest_common.sh@10 -- # set +x 00:11:11.844 14:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.844 14:47:54 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:11.844 14:47:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.844 14:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.844 14:47:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.105 14:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.105 14:47:54 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:12.105 
14:47:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.105 14:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.105 14:47:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.676 14:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.676 14:47:55 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:12.676 14:47:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.676 14:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.676 14:47:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.936 14:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.936 14:47:55 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:12.936 14:47:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.936 14:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.936 14:47:55 -- common/autotest_common.sh@10 -- # set +x 00:11:13.196 14:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.196 14:47:55 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:13.196 14:47:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.196 14:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.196 14:47:55 -- common/autotest_common.sh@10 -- # set +x 00:11:13.456 14:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.456 14:47:56 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:13.456 14:47:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.456 14:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.456 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:11:13.716 14:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.716 14:47:56 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:13.716 14:47:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.716 14:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.716 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.286 14:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.286 14:47:56 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:14.286 14:47:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.286 14:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.286 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.546 14:47:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.546 14:47:56 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:14.546 14:47:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.546 14:47:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.546 14:47:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.805 14:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.805 14:47:57 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:14.805 14:47:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.805 14:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.805 14:47:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.065 14:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.065 14:47:57 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:15.065 14:47:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.065 14:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.065 14:47:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.325 14:47:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.325 14:47:57 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:15.325 14:47:57 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.325 14:47:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.325 14:47:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.894 14:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.894 14:47:58 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:15.894 14:47:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.894 14:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.894 14:47:58 -- common/autotest_common.sh@10 -- # set +x 00:11:16.153 14:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.153 14:47:58 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:16.153 14:47:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.153 14:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.153 14:47:58 -- common/autotest_common.sh@10 -- # set +x 00:11:16.412 14:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.412 14:47:58 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:16.412 14:47:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.412 14:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.412 14:47:58 -- common/autotest_common.sh@10 -- # set +x 00:11:16.673 14:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.673 14:47:59 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:16.673 14:47:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.673 14:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.673 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:11:17.244 14:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.244 14:47:59 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:17.244 14:47:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.244 14:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.244 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:11:17.504 14:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.504 14:47:59 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:17.504 14:47:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.504 14:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.504 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:11:17.764 14:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.764 14:48:00 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:17.764 14:48:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.764 14:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.764 14:48:00 -- common/autotest_common.sh@10 -- # set +x 00:11:18.025 14:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.025 14:48:00 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:18.025 14:48:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.025 14:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.025 14:48:00 -- common/autotest_common.sh@10 -- # set +x 00:11:18.285 14:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.285 14:48:00 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:18.285 14:48:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.285 14:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.285 14:48:00 -- common/autotest_common.sh@10 -- # set +x 00:11:18.857 14:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.857 14:48:01 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:18.857 14:48:01 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.857 14:48:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.857 14:48:01 -- common/autotest_common.sh@10 -- # set +x 00:11:18.857 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:19.117 14:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.117 14:48:01 -- target/connect_stress.sh@34 -- # kill -0 958240 00:11:19.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (958240) - No such process 00:11:19.117 14:48:01 -- target/connect_stress.sh@38 -- # wait 958240 00:11:19.117 14:48:01 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:19.117 14:48:01 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:19.117 14:48:01 -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:19.117 14:48:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:19.117 14:48:01 -- nvmf/common.sh@117 -- # sync 00:11:19.117 14:48:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.117 14:48:01 -- nvmf/common.sh@120 -- # set +e 00:11:19.117 14:48:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.117 14:48:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.117 rmmod nvme_tcp 00:11:19.117 rmmod nvme_fabrics 00:11:19.117 rmmod nvme_keyring 00:11:19.117 14:48:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.117 14:48:01 -- nvmf/common.sh@124 -- # set -e 00:11:19.117 14:48:01 -- nvmf/common.sh@125 -- # return 0 00:11:19.117 14:48:01 -- nvmf/common.sh@478 -- # '[' -n 957890 ']' 00:11:19.117 14:48:01 -- nvmf/common.sh@479 -- # killprocess 957890 00:11:19.117 14:48:01 -- common/autotest_common.sh@936 -- # '[' -z 957890 ']' 00:11:19.117 14:48:01 -- common/autotest_common.sh@940 -- # kill -0 957890 00:11:19.117 14:48:01 -- common/autotest_common.sh@941 -- # uname 00:11:19.117 14:48:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:19.117 14:48:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 957890 00:11:19.117 14:48:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:19.117 14:48:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:19.117 14:48:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 957890' 00:11:19.117 killing process with pid 957890 00:11:19.117 14:48:01 -- common/autotest_common.sh@955 -- # kill 957890 00:11:19.117 14:48:01 -- common/autotest_common.sh@960 -- # wait 957890 00:11:19.117 14:48:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:19.379 14:48:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:19.379 14:48:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:19.379 14:48:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.379 14:48:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.379 14:48:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.379 14:48:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.379 14:48:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.306 14:48:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.306 00:11:21.306 real 0m20.834s 00:11:21.306 user 0m41.962s 00:11:21.306 sys 0m8.678s 00:11:21.306 14:48:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:21.306 14:48:03 -- common/autotest_common.sh@10 -- # set +x 00:11:21.306 ************************************ 
00:11:21.306 END TEST nvmf_connect_stress 00:11:21.306 ************************************ 00:11:21.306 14:48:03 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:21.306 14:48:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:21.306 14:48:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.306 14:48:03 -- common/autotest_common.sh@10 -- # set +x 00:11:21.569 ************************************ 00:11:21.569 START TEST nvmf_fused_ordering 00:11:21.569 ************************************ 00:11:21.569 14:48:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:21.569 * Looking for test storage... 00:11:21.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.569 14:48:04 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.569 14:48:04 -- nvmf/common.sh@7 -- # uname -s 00:11:21.569 14:48:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.569 14:48:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.569 14:48:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.569 14:48:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.569 14:48:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.569 14:48:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.569 14:48:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.569 14:48:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.569 14:48:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.569 14:48:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.569 14:48:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:21.569 14:48:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:21.569 14:48:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.569 14:48:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.569 14:48:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.569 14:48:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.569 14:48:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.569 14:48:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.569 14:48:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.569 14:48:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.569 14:48:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.569 14:48:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.569 14:48:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.569 14:48:04 -- paths/export.sh@5 -- # export PATH 00:11:21.569 14:48:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.569 14:48:04 -- nvmf/common.sh@47 -- # : 0 00:11:21.569 14:48:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.569 14:48:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.570 14:48:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.570 14:48:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.570 14:48:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.570 14:48:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.570 14:48:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.570 14:48:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.570 14:48:04 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:21.570 14:48:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:21.570 14:48:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.570 14:48:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:21.570 14:48:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:21.570 14:48:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:21.570 14:48:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.570 14:48:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.570 14:48:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.570 14:48:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:21.570 14:48:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:21.570 14:48:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.570 14:48:04 -- common/autotest_common.sh@10 -- # set +x 00:11:29.708 14:48:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:29.708 14:48:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.708 14:48:11 -- nvmf/common.sh@291 -- # local -a pci_devs 
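common.sh above also prepares an initiator identity: nvme gen-hostnqn produces NVME_HOSTNQN/NVME_HOSTID and NVME_CONNECT is set to plain "nvme connect". The tests in this log drive I/O with their own SPDK-based tools, but the same target would also be reachable with kernel nvme-cli from the default namespace; a hedged sketch (host NQN copied from the gen-hostnqn output above, everything else assumed):
    # connect to the subsystem the tests create on 10.0.0.2:4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    nvme list                                   # the NULL1 namespace should appear as a block device
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1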
00:11:29.708 14:48:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.708 14:48:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.708 14:48:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.708 14:48:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.708 14:48:11 -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.708 14:48:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.708 14:48:11 -- nvmf/common.sh@296 -- # e810=() 00:11:29.708 14:48:11 -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.708 14:48:11 -- nvmf/common.sh@297 -- # x722=() 00:11:29.708 14:48:11 -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.708 14:48:11 -- nvmf/common.sh@298 -- # mlx=() 00:11:29.708 14:48:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.708 14:48:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.708 14:48:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.709 14:48:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.709 14:48:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.709 14:48:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.709 14:48:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.709 14:48:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:29.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:29.709 14:48:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.709 14:48:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:29.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:29.709 14:48:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.709 14:48:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:11:29.709 14:48:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.709 14:48:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.709 14:48:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:29.709 14:48:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.709 14:48:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:29.709 Found net devices under 0000:31:00.0: cvl_0_0 00:11:29.709 14:48:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.709 14:48:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.709 14:48:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.709 14:48:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:29.709 14:48:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.709 14:48:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:29.709 Found net devices under 0000:31:00.1: cvl_0_1 00:11:29.709 14:48:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.709 14:48:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:29.709 14:48:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:29.709 14:48:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:29.709 14:48:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.709 14:48:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.709 14:48:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.709 14:48:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.709 14:48:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.709 14:48:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.709 14:48:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:29.709 14:48:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.709 14:48:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.709 14:48:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:29.709 14:48:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:29.709 14:48:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.709 14:48:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.709 14:48:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.709 14:48:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.709 14:48:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:29.709 14:48:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.709 14:48:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.709 14:48:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.709 14:48:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:11:29.709 00:11:29.709 --- 10.0.0.2 ping statistics --- 00:11:29.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.709 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:11:29.709 14:48:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:11:29.709 00:11:29.709 --- 10.0.0.1 ping statistics --- 00:11:29.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.709 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:11:29.709 14:48:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.709 14:48:11 -- nvmf/common.sh@411 -- # return 0 00:11:29.709 14:48:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:29.709 14:48:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.709 14:48:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:29.709 14:48:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.709 14:48:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:29.709 14:48:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:29.709 14:48:11 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:29.709 14:48:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:29.709 14:48:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:29.709 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.709 14:48:11 -- nvmf/common.sh@470 -- # nvmfpid=965016 00:11:29.709 14:48:11 -- nvmf/common.sh@471 -- # waitforlisten 965016 00:11:29.709 14:48:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:29.709 14:48:11 -- common/autotest_common.sh@817 -- # '[' -z 965016 ']' 00:11:29.709 14:48:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.709 14:48:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:29.709 14:48:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.709 14:48:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:29.709 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.709 [2024-04-26 14:48:11.599040] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:29.709 [2024-04-26 14:48:11.599108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.709 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.709 [2024-04-26 14:48:11.688132] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.709 [2024-04-26 14:48:11.779709] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.709 [2024-04-26 14:48:11.779770] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:29.709 [2024-04-26 14:48:11.779779] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.709 [2024-04-26 14:48:11.779786] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.709 [2024-04-26 14:48:11.779793] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.709 [2024-04-26 14:48:11.779820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.971 14:48:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:29.971 14:48:12 -- common/autotest_common.sh@850 -- # return 0 00:11:29.971 14:48:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:29.971 14:48:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:29.971 14:48:12 -- common/autotest_common.sh@10 -- # set +x 00:11:29.971 14:48:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.971 14:48:12 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.971 14:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.971 14:48:12 -- common/autotest_common.sh@10 -- # set +x 00:11:29.971 [2024-04-26 14:48:12.427405] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.971 14:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.971 14:48:12 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:29.971 14:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.971 14:48:12 -- common/autotest_common.sh@10 -- # set +x 00:11:29.971 14:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.971 14:48:12 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.971 14:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.971 14:48:12 -- common/autotest_common.sh@10 -- # set +x 00:11:29.971 [2024-04-26 14:48:12.451651] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.971 14:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.971 14:48:12 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:29.971 14:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.971 14:48:12 -- common/autotest_common.sh@10 -- # set +x 00:11:29.971 NULL1 00:11:29.971 14:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.971 14:48:12 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:29.971 14:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.971 14:48:12 -- common/autotest_common.sh@10 -- # set +x 00:11:29.971 14:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.971 14:48:12 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:29.971 14:48:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.971 14:48:12 -- common/autotest_common.sh@10 -- # set +x 00:11:29.971 14:48:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.971 14:48:12 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:29.971 [2024-04-26 14:48:12.520470] Starting SPDK v24.05-pre 
git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:29.971 [2024-04-26 14:48:12.520533] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965264 ] 00:11:29.971 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.542 Attached to nqn.2016-06.io.spdk:cnode1 00:11:30.542 Namespace ID: 1 size: 1GB 00:11:30.542 fused_ordering(0) 00:11:30.542 fused_ordering(1) 00:11:30.542 fused_ordering(2) 00:11:30.542 fused_ordering(3) 00:11:30.542 fused_ordering(4) 00:11:30.542 fused_ordering(5) 00:11:30.542 fused_ordering(6) 00:11:30.542 fused_ordering(7) 00:11:30.542 fused_ordering(8) 00:11:30.542 fused_ordering(9) 00:11:30.542 fused_ordering(10) 00:11:30.542 fused_ordering(11) 00:11:30.542 fused_ordering(12) 00:11:30.542 fused_ordering(13) 00:11:30.542 fused_ordering(14) 00:11:30.542 fused_ordering(15) 00:11:30.542 fused_ordering(16) 00:11:30.543 fused_ordering(17) 00:11:30.543 fused_ordering(18) 00:11:30.543 fused_ordering(19) 00:11:30.543 fused_ordering(20) 00:11:30.543 fused_ordering(21) 00:11:30.543 fused_ordering(22) 00:11:30.543 fused_ordering(23) 00:11:30.543 fused_ordering(24) 00:11:30.543 fused_ordering(25) 00:11:30.543 fused_ordering(26) 00:11:30.543 fused_ordering(27) 00:11:30.543 fused_ordering(28) 00:11:30.543 fused_ordering(29) 00:11:30.543 fused_ordering(30) 00:11:30.543 fused_ordering(31) 00:11:30.543 fused_ordering(32) 00:11:30.543 fused_ordering(33) 00:11:30.543 fused_ordering(34) 00:11:30.543 fused_ordering(35) 00:11:30.543 fused_ordering(36) 00:11:30.543 fused_ordering(37) 00:11:30.543 fused_ordering(38) 00:11:30.543 fused_ordering(39) 00:11:30.543 fused_ordering(40) 00:11:30.543 fused_ordering(41) 00:11:30.543 fused_ordering(42) 00:11:30.543 fused_ordering(43) 00:11:30.543 fused_ordering(44) 00:11:30.543 fused_ordering(45) 00:11:30.543 fused_ordering(46) 00:11:30.543 fused_ordering(47) 00:11:30.543 fused_ordering(48) 00:11:30.543 fused_ordering(49) 00:11:30.543 fused_ordering(50) 00:11:30.543 fused_ordering(51) 00:11:30.543 fused_ordering(52) 00:11:30.543 fused_ordering(53) 00:11:30.543 fused_ordering(54) 00:11:30.543 fused_ordering(55) 00:11:30.543 fused_ordering(56) 00:11:30.543 fused_ordering(57) 00:11:30.543 fused_ordering(58) 00:11:30.543 fused_ordering(59) 00:11:30.543 fused_ordering(60) 00:11:30.543 fused_ordering(61) 00:11:30.543 fused_ordering(62) 00:11:30.543 fused_ordering(63) 00:11:30.543 fused_ordering(64) 00:11:30.543 fused_ordering(65) 00:11:30.543 fused_ordering(66) 00:11:30.543 fused_ordering(67) 00:11:30.543 fused_ordering(68) 00:11:30.543 fused_ordering(69) 00:11:30.543 fused_ordering(70) 00:11:30.543 fused_ordering(71) 00:11:30.543 fused_ordering(72) 00:11:30.543 fused_ordering(73) 00:11:30.543 fused_ordering(74) 00:11:30.543 fused_ordering(75) 00:11:30.543 fused_ordering(76) 00:11:30.543 fused_ordering(77) 00:11:30.543 fused_ordering(78) 00:11:30.543 fused_ordering(79) 00:11:30.543 fused_ordering(80) 00:11:30.543 fused_ordering(81) 00:11:30.543 fused_ordering(82) 00:11:30.543 fused_ordering(83) 00:11:30.543 fused_ordering(84) 00:11:30.543 fused_ordering(85) 00:11:30.543 fused_ordering(86) 00:11:30.543 fused_ordering(87) 00:11:30.543 fused_ordering(88) 00:11:30.543 fused_ordering(89) 00:11:30.543 fused_ordering(90) 00:11:30.543 fused_ordering(91) 00:11:30.543 fused_ordering(92) 00:11:30.543 fused_ordering(93) 00:11:30.543 fused_ordering(94) 00:11:30.543 fused_ordering(95) 
00:11:30.543 [... fused_ordering(96) through fused_ordering(1023) completed; per-entry output elided, timestamps 00:11:30.543 - 00:11:32.206 ...]
00:11:32.206 14:48:14 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:32.206 14:48:14 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:32.206 14:48:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:32.206 14:48:14 -- nvmf/common.sh@117 -- # sync 00:11:32.206 14:48:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.206 14:48:14 -- nvmf/common.sh@120 -- # set +e 00:11:32.206 14:48:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.206 14:48:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:32.206 rmmod nvme_tcp 00:11:32.206 rmmod nvme_fabrics 00:11:32.206 rmmod nvme_keyring 00:11:32.206 14:48:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.206 14:48:14 -- nvmf/common.sh@124 -- # set -e 00:11:32.206 14:48:14 -- nvmf/common.sh@125 -- # return 0 00:11:32.206 14:48:14 -- nvmf/common.sh@478 -- # '[' -n 965016 ']' 00:11:32.206 14:48:14 -- nvmf/common.sh@479 -- # killprocess 965016 00:11:32.206 14:48:14 -- common/autotest_common.sh@936 -- # '[' -z 965016 ']' 00:11:32.206 14:48:14 -- common/autotest_common.sh@940 -- # kill -0 965016 00:11:32.466 14:48:14 -- common/autotest_common.sh@941 -- # uname 00:11:32.466 14:48:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:32.466 14:48:14 --
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 965016 00:11:32.466 14:48:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:32.466 14:48:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:32.466 14:48:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 965016' 00:11:32.466 killing process with pid 965016 00:11:32.466 14:48:14 -- common/autotest_common.sh@955 -- # kill 965016 00:11:32.466 14:48:14 -- common/autotest_common.sh@960 -- # wait 965016 00:11:32.466 14:48:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:32.466 14:48:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:32.466 14:48:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:32.466 14:48:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.466 14:48:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.466 14:48:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.466 14:48:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.466 14:48:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.008 14:48:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:35.008 00:11:35.008 real 0m13.060s 00:11:35.008 user 0m6.914s 00:11:35.008 sys 0m6.765s 00:11:35.008 14:48:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:35.008 14:48:17 -- common/autotest_common.sh@10 -- # set +x 00:11:35.008 ************************************ 00:11:35.008 END TEST nvmf_fused_ordering 00:11:35.008 ************************************ 00:11:35.008 14:48:17 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:35.008 14:48:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:35.008 14:48:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.008 14:48:17 -- common/autotest_common.sh@10 -- # set +x 00:11:35.008 ************************************ 00:11:35.008 START TEST nvmf_delete_subsystem 00:11:35.008 ************************************ 00:11:35.008 14:48:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:35.008 * Looking for test storage... 
00:11:35.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.008 14:48:17 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.008 14:48:17 -- nvmf/common.sh@7 -- # uname -s 00:11:35.008 14:48:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.008 14:48:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.008 14:48:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.008 14:48:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.008 14:48:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.008 14:48:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.008 14:48:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.008 14:48:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.008 14:48:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.008 14:48:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.008 14:48:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:35.008 14:48:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:35.008 14:48:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.008 14:48:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.008 14:48:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.008 14:48:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.008 14:48:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.008 14:48:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.008 14:48:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.008 14:48:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.008 14:48:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.008 14:48:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.008 14:48:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.008 14:48:17 -- paths/export.sh@5 -- # export PATH 00:11:35.008 14:48:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.008 14:48:17 -- nvmf/common.sh@47 -- # : 0 00:11:35.008 14:48:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.008 14:48:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.008 14:48:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.008 14:48:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.008 14:48:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.008 14:48:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.008 14:48:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.008 14:48:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.008 14:48:17 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:35.008 14:48:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:35.008 14:48:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.008 14:48:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:35.008 14:48:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:35.008 14:48:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:35.008 14:48:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.008 14:48:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.008 14:48:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.008 14:48:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:35.008 14:48:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:35.008 14:48:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.008 14:48:17 -- common/autotest_common.sh@10 -- # set +x 00:11:41.647 14:48:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:41.647 14:48:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.647 14:48:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.647 14:48:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.647 14:48:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.647 14:48:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.647 14:48:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.647 14:48:24 -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.647 14:48:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.647 14:48:24 -- nvmf/common.sh@296 -- # e810=() 00:11:41.647 14:48:24 -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.647 14:48:24 -- nvmf/common.sh@297 -- # x722=() 
00:11:41.647 14:48:24 -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.647 14:48:24 -- nvmf/common.sh@298 -- # mlx=() 00:11:41.647 14:48:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.647 14:48:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.647 14:48:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.647 14:48:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:41.647 14:48:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.647 14:48:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.647 14:48:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:41.647 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:41.647 14:48:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.647 14:48:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:41.647 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:41.647 14:48:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.647 14:48:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.647 14:48:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.647 14:48:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:41.647 14:48:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.647 14:48:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:41.647 Found net devices under 0000:31:00.0: cvl_0_0 00:11:41.647 14:48:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:41.647 14:48:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.647 14:48:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.647 14:48:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:41.647 14:48:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.647 14:48:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:41.647 Found net devices under 0000:31:00.1: cvl_0_1 00:11:41.647 14:48:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.647 14:48:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:41.647 14:48:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:41.647 14:48:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:41.647 14:48:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:41.647 14:48:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.647 14:48:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.647 14:48:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.647 14:48:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:41.647 14:48:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.647 14:48:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.647 14:48:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:41.647 14:48:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.647 14:48:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.647 14:48:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:41.647 14:48:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:41.908 14:48:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.908 14:48:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.908 14:48:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.908 14:48:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.908 14:48:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:41.908 14:48:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.171 14:48:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.171 14:48:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.172 14:48:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:42.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:11:42.172 00:11:42.172 --- 10.0.0.2 ping statistics --- 00:11:42.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.172 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:11:42.172 14:48:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:11:42.172 00:11:42.172 --- 10.0.0.1 ping statistics --- 00:11:42.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.172 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:11:42.172 14:48:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.172 14:48:24 -- nvmf/common.sh@411 -- # return 0 00:11:42.172 14:48:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:42.172 14:48:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.172 14:48:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:42.172 14:48:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:42.172 14:48:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.172 14:48:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:42.172 14:48:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:42.172 14:48:24 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:42.172 14:48:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:42.172 14:48:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:42.172 14:48:24 -- common/autotest_common.sh@10 -- # set +x 00:11:42.172 14:48:24 -- nvmf/common.sh@470 -- # nvmfpid=969996 00:11:42.172 14:48:24 -- nvmf/common.sh@471 -- # waitforlisten 969996 00:11:42.172 14:48:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:42.172 14:48:24 -- common/autotest_common.sh@817 -- # '[' -z 969996 ']' 00:11:42.172 14:48:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.172 14:48:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:42.172 14:48:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.172 14:48:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:42.172 14:48:24 -- common/autotest_common.sh@10 -- # set +x 00:11:42.172 [2024-04-26 14:48:24.703404] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:42.172 [2024-04-26 14:48:24.703475] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.172 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.172 [2024-04-26 14:48:24.775260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:42.460 [2024-04-26 14:48:24.848325] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.460 [2024-04-26 14:48:24.848365] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.460 [2024-04-26 14:48:24.848373] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.460 [2024-04-26 14:48:24.848379] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.460 [2024-04-26 14:48:24.848385] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
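The nvmfappstart/waitforlisten sequence above boils down to starting nvmf_tgt inside the target network namespace and then polling its RPC socket until the app answers. A minimal sketch of that flow, reusing the namespace, binary path and core mask echoed in this log (the polling loop is illustrative, not the exact waitforlisten implementation):
  # start the SPDK NVMe-oF target in the target namespace, exactly as echoed by nvmfappstart above
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # illustrative wait: poll the default RPC socket until the app responds, after which rpc_cmd calls are safe
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done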
00:11:42.460 [2024-04-26 14:48:24.848455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.460 [2024-04-26 14:48:24.848456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.042 14:48:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:43.042 14:48:25 -- common/autotest_common.sh@850 -- # return 0 00:11:43.042 14:48:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:43.042 14:48:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:43.042 14:48:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.042 14:48:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.042 14:48:25 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.042 14:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.042 14:48:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.042 [2024-04-26 14:48:25.516333] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.042 14:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.042 14:48:25 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:43.042 14:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.042 14:48:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.042 14:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.042 14:48:25 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.042 14:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.042 14:48:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.042 [2024-04-26 14:48:25.532477] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.042 14:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.042 14:48:25 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:43.042 14:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.042 14:48:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.042 NULL1 00:11:43.042 14:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.042 14:48:25 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:43.042 14:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.042 14:48:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.042 Delay0 00:11:43.042 14:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.042 14:48:25 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.042 14:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.042 14:48:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.042 14:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.042 14:48:25 -- target/delete_subsystem.sh@28 -- # perf_pid=970060 00:11:43.042 14:48:25 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:43.042 14:48:25 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:43.042 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.042 [2024-04-26 14:48:25.617132] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:44.954 14:48:27 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.954 14:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:44.954 14:48:27 -- common/autotest_common.sh@10 -- # set +x
00:11:45.216 [... repeated 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries elided ...]
00:11:45.216 [2024-04-26 14:48:27.705187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f00a000c3d0 is same with the state(5) to be set
00:11:45.216 [... further 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' entries elided ...]
00:11:46.160 [2024-04-26 14:48:28.671268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eac40 is same with the state(5) to be set
00:11:46.160 [... further 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' entries elided ...]
00:11:46.160 [2024-04-26 14:48:28.704550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4cd0 is same with the state(5) to be set
00:11:46.160 [... further 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' entries elided ...] 00:11:46.160 Read completed
with error (sct=0, sc=8) 00:11:46.160 [2024-04-26 14:48:28.704692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d4910 is same with the state(5) to be set 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 [2024-04-26 14:48:28.707361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f00a000c690 is same with the state(5) to be set 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Write completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.160 Read completed with error (sct=0, sc=8) 00:11:46.161 Write completed with error (sct=0, sc=8) 00:11:46.161 Read completed with error (sct=0, sc=8) 00:11:46.161 Read completed with error (sct=0, sc=8) 00:11:46.161 Read completed with error (sct=0, sc=8) 00:11:46.161 Read completed with error (sct=0, sc=8) 00:11:46.161 Read completed with error (sct=0, sc=8) 00:11:46.161 Read completed with error (sct=0, sc=8) 00:11:46.161 [2024-04-26 14:48:28.707535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f00a000bf90 is same with 
the state(5) to be set 00:11:46.161 [2024-04-26 14:48:28.707972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20eac40 (9): Bad file descriptor 00:11:46.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:46.161 Initializing NVMe Controllers 00:11:46.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:46.161 Controller IO queue size 128, less than required. 00:11:46.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:46.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:46.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:46.161 Initialization complete. Launching workers. 00:11:46.161 ======================================================== 00:11:46.161 Latency(us) 00:11:46.161 Device Information : IOPS MiB/s Average min max 00:11:46.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.34 0.09 908423.59 292.60 1006793.24 00:11:46.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.91 0.08 944517.89 308.52 2001387.94 00:11:46.161 ======================================================== 00:11:46.161 Total : 350.25 0.17 925520.89 292.60 2001387.94 00:11:46.161 00:11:46.161 14:48:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.161 14:48:28 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:46.161 14:48:28 -- target/delete_subsystem.sh@35 -- # kill -0 970060 00:11:46.161 14:48:28 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:46.732 14:48:29 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:46.732 14:48:29 -- target/delete_subsystem.sh@35 -- # kill -0 970060 00:11:46.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (970060) - No such process 00:11:46.732 14:48:29 -- target/delete_subsystem.sh@45 -- # NOT wait 970060 00:11:46.732 14:48:29 -- common/autotest_common.sh@638 -- # local es=0 00:11:46.732 14:48:29 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 970060 00:11:46.732 14:48:29 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:46.732 14:48:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.732 14:48:29 -- common/autotest_common.sh@630 -- # type -t wait 00:11:46.732 14:48:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.732 14:48:29 -- common/autotest_common.sh@641 -- # wait 970060 00:11:46.732 14:48:29 -- common/autotest_common.sh@641 -- # es=1 00:11:46.732 14:48:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.732 14:48:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.732 14:48:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.732 14:48:29 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:46.732 14:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.732 14:48:29 -- common/autotest_common.sh@10 -- # set +x 00:11:46.732 14:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.732 14:48:29 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.732 14:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.732 14:48:29 -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.732 [2024-04-26 14:48:29.240477] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.732 14:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.732 14:48:29 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.732 14:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.732 14:48:29 -- common/autotest_common.sh@10 -- # set +x 00:11:46.732 14:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.732 14:48:29 -- target/delete_subsystem.sh@54 -- # perf_pid=970875 00:11:46.732 14:48:29 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:46.733 14:48:29 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:46.733 14:48:29 -- target/delete_subsystem.sh@57 -- # kill -0 970875 00:11:46.733 14:48:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.733 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.733 [2024-04-26 14:48:29.307888] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:47.303 14:48:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.303 14:48:29 -- target/delete_subsystem.sh@57 -- # kill -0 970875 00:11:47.303 14:48:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.875 14:48:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.875 14:48:30 -- target/delete_subsystem.sh@57 -- # kill -0 970875 00:11:47.875 14:48:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.136 14:48:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.136 14:48:30 -- target/delete_subsystem.sh@57 -- # kill -0 970875 00:11:48.136 14:48:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.708 14:48:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.708 14:48:31 -- target/delete_subsystem.sh@57 -- # kill -0 970875 00:11:48.708 14:48:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.280 14:48:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.280 14:48:31 -- target/delete_subsystem.sh@57 -- # kill -0 970875 00:11:49.280 14:48:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.852 14:48:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.852 14:48:32 -- target/delete_subsystem.sh@57 -- # kill -0 970875 00:11:49.852 14:48:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.852 Initializing NVMe Controllers 00:11:49.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:49.852 Controller IO queue size 128, less than required. 00:11:49.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:49.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:49.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:49.852 Initialization complete. Launching workers. 
00:11:49.852 ======================================================== 00:11:49.852 Latency(us) 00:11:49.852 Device Information : IOPS MiB/s Average min max 00:11:49.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002036.40 1000130.93 1042871.23 00:11:49.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003280.69 1000258.07 1009807.68 00:11:49.852 ======================================================== 00:11:49.852 Total : 256.00 0.12 1002658.54 1000130.93 1042871.23 00:11:49.852 00:11:50.423 14:48:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:50.423 14:48:32 -- target/delete_subsystem.sh@57 -- # kill -0 970875 00:11:50.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (970875) - No such process 00:11:50.423 14:48:32 -- target/delete_subsystem.sh@67 -- # wait 970875 00:11:50.423 14:48:32 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:50.423 14:48:32 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:50.423 14:48:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:50.423 14:48:32 -- nvmf/common.sh@117 -- # sync 00:11:50.423 14:48:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:50.423 14:48:32 -- nvmf/common.sh@120 -- # set +e 00:11:50.423 14:48:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:50.423 14:48:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:50.423 rmmod nvme_tcp 00:11:50.423 rmmod nvme_fabrics 00:11:50.423 rmmod nvme_keyring 00:11:50.423 14:48:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.423 14:48:32 -- nvmf/common.sh@124 -- # set -e 00:11:50.423 14:48:32 -- nvmf/common.sh@125 -- # return 0 00:11:50.423 14:48:32 -- nvmf/common.sh@478 -- # '[' -n 969996 ']' 00:11:50.423 14:48:32 -- nvmf/common.sh@479 -- # killprocess 969996 00:11:50.423 14:48:32 -- common/autotest_common.sh@936 -- # '[' -z 969996 ']' 00:11:50.423 14:48:32 -- common/autotest_common.sh@940 -- # kill -0 969996 00:11:50.423 14:48:32 -- common/autotest_common.sh@941 -- # uname 00:11:50.423 14:48:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:50.423 14:48:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 969996 00:11:50.423 14:48:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:50.423 14:48:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:50.423 14:48:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 969996' 00:11:50.423 killing process with pid 969996 00:11:50.423 14:48:32 -- common/autotest_common.sh@955 -- # kill 969996 00:11:50.423 14:48:32 -- common/autotest_common.sh@960 -- # wait 969996 00:11:50.423 14:48:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:50.423 14:48:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:50.423 14:48:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:50.423 14:48:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.423 14:48:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.423 14:48:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.423 14:48:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.423 14:48:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.967 14:48:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:52.967 00:11:52.967 real 0m17.814s 00:11:52.967 user 0m30.533s 00:11:52.967 sys 0m6.085s 00:11:52.967 14:48:35 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:52.967 14:48:35 -- common/autotest_common.sh@10 -- # set +x 00:11:52.967 ************************************ 00:11:52.967 END TEST nvmf_delete_subsystem 00:11:52.967 ************************************ 00:11:52.967 14:48:35 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:52.967 14:48:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:52.967 14:48:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.967 14:48:35 -- common/autotest_common.sh@10 -- # set +x 00:11:52.967 ************************************ 00:11:52.967 START TEST nvmf_ns_masking 00:11:52.967 ************************************ 00:11:52.967 14:48:35 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:52.967 * Looking for test storage... 00:11:52.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.967 14:48:35 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.967 14:48:35 -- nvmf/common.sh@7 -- # uname -s 00:11:52.967 14:48:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.967 14:48:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.967 14:48:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.967 14:48:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.967 14:48:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.967 14:48:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.967 14:48:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.967 14:48:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.967 14:48:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.967 14:48:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.967 14:48:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:52.967 14:48:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:52.967 14:48:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.967 14:48:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.967 14:48:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.967 14:48:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.967 14:48:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.967 14:48:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.967 14:48:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.967 14:48:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.967 14:48:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.967 14:48:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.967 14:48:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.967 14:48:35 -- paths/export.sh@5 -- # export PATH 00:11:52.967 14:48:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.967 14:48:35 -- nvmf/common.sh@47 -- # : 0 00:11:52.967 14:48:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.967 14:48:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.967 14:48:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.967 14:48:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.967 14:48:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.967 14:48:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.967 14:48:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.967 14:48:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.967 14:48:35 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.967 14:48:35 -- target/ns_masking.sh@11 -- # loops=5 00:11:52.967 14:48:35 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:52.967 14:48:35 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:52.967 14:48:35 -- target/ns_masking.sh@15 -- # uuidgen 00:11:52.967 14:48:35 -- target/ns_masking.sh@15 -- # HOSTID=c07dccb0-7c43-4404-9c6f-f87d95edfcb0 00:11:52.967 14:48:35 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:52.967 14:48:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:52.967 14:48:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.967 14:48:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:52.967 14:48:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:52.967 14:48:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:52.967 14:48:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.967 14:48:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.967 14:48:35 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:11:52.967 14:48:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:52.967 14:48:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:52.967 14:48:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:52.967 14:48:35 -- common/autotest_common.sh@10 -- # set +x 00:12:01.111 14:48:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:01.111 14:48:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:01.111 14:48:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:01.111 14:48:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:01.111 14:48:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:01.111 14:48:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:01.111 14:48:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:01.111 14:48:42 -- nvmf/common.sh@295 -- # net_devs=() 00:12:01.111 14:48:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:01.111 14:48:42 -- nvmf/common.sh@296 -- # e810=() 00:12:01.111 14:48:42 -- nvmf/common.sh@296 -- # local -ga e810 00:12:01.111 14:48:42 -- nvmf/common.sh@297 -- # x722=() 00:12:01.111 14:48:42 -- nvmf/common.sh@297 -- # local -ga x722 00:12:01.111 14:48:42 -- nvmf/common.sh@298 -- # mlx=() 00:12:01.111 14:48:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:01.111 14:48:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.111 14:48:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:01.111 14:48:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:01.111 14:48:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:01.111 14:48:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.111 14:48:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:01.111 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:01.111 14:48:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.111 14:48:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:01.111 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:01.111 14:48:42 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:01.111 14:48:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.111 14:48:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.111 14:48:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:01.111 14:48:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.111 14:48:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:01.111 Found net devices under 0000:31:00.0: cvl_0_0 00:12:01.111 14:48:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.111 14:48:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.111 14:48:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.111 14:48:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:01.111 14:48:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.111 14:48:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:01.111 Found net devices under 0000:31:00.1: cvl_0_1 00:12:01.111 14:48:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.111 14:48:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:01.111 14:48:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:01.111 14:48:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:01.111 14:48:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:01.111 14:48:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.111 14:48:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.111 14:48:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.111 14:48:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:01.111 14:48:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.111 14:48:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.111 14:48:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:01.111 14:48:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.111 14:48:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.111 14:48:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:01.111 14:48:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:01.111 14:48:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.111 14:48:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.111 14:48:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.111 14:48:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.111 14:48:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:01.111 14:48:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.111 14:48:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.111 14:48:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.111 14:48:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:01.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:12:01.111 00:12:01.111 --- 10.0.0.2 ping statistics --- 00:12:01.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.111 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:12:01.111 14:48:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:12:01.111 00:12:01.111 --- 10.0.0.1 ping statistics --- 00:12:01.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.111 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:12:01.111 14:48:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.111 14:48:42 -- nvmf/common.sh@411 -- # return 0 00:12:01.111 14:48:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:01.111 14:48:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.111 14:48:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:01.112 14:48:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:01.112 14:48:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.112 14:48:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:01.112 14:48:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:01.112 14:48:42 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:12:01.112 14:48:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:01.112 14:48:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:01.112 14:48:42 -- common/autotest_common.sh@10 -- # set +x 00:12:01.112 14:48:42 -- nvmf/common.sh@470 -- # nvmfpid=975779 00:12:01.112 14:48:42 -- nvmf/common.sh@471 -- # waitforlisten 975779 00:12:01.112 14:48:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.112 14:48:42 -- common/autotest_common.sh@817 -- # '[' -z 975779 ']' 00:12:01.112 14:48:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.112 14:48:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:01.112 14:48:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.112 14:48:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:01.112 14:48:42 -- common/autotest_common.sh@10 -- # set +x 00:12:01.112 [2024-04-26 14:48:42.773312] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:01.112 [2024-04-26 14:48:42.773375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.112 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.112 [2024-04-26 14:48:42.846739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.112 [2024-04-26 14:48:42.922363] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:01.112 [2024-04-26 14:48:42.922403] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.112 [2024-04-26 14:48:42.922412] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.112 [2024-04-26 14:48:42.922422] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.112 [2024-04-26 14:48:42.922429] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.112 [2024-04-26 14:48:42.925857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.112 [2024-04-26 14:48:42.926022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.112 [2024-04-26 14:48:42.926191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.112 [2024-04-26 14:48:42.926192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.112 14:48:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:01.112 14:48:43 -- common/autotest_common.sh@850 -- # return 0 00:12:01.112 14:48:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:01.112 14:48:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:01.112 14:48:43 -- common/autotest_common.sh@10 -- # set +x 00:12:01.112 14:48:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.112 14:48:43 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:01.112 [2024-04-26 14:48:43.734862] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.112 14:48:43 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:01.112 14:48:43 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:01.112 14:48:43 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:01.373 Malloc1 00:12:01.373 14:48:43 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:01.633 Malloc2 00:12:01.633 14:48:44 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.633 14:48:44 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:01.894 14:48:44 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.155 [2024-04-26 14:48:44.563431] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.155 14:48:44 -- target/ns_masking.sh@61 -- # connect 00:12:02.155 14:48:44 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c07dccb0-7c43-4404-9c6f-f87d95edfcb0 -a 10.0.0.2 -s 4420 -i 4 00:12:02.155 14:48:44 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.155 14:48:44 -- common/autotest_common.sh@1184 -- # local i=0 00:12:02.155 14:48:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.155 14:48:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
00:12:02.155 14:48:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:04.086 14:48:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:04.086 14:48:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:04.086 14:48:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.086 14:48:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:04.086 14:48:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.086 14:48:46 -- common/autotest_common.sh@1194 -- # return 0 00:12:04.086 14:48:46 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:04.086 14:48:46 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:04.353 14:48:46 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:04.353 14:48:46 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:04.353 14:48:46 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:04.353 14:48:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:04.353 14:48:46 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:04.353 [ 0]:0x1 00:12:04.353 14:48:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.353 14:48:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:04.353 14:48:46 -- target/ns_masking.sh@40 -- # nguid=def859529f304aa6bf42956db56a5c90 00:12:04.353 14:48:46 -- target/ns_masking.sh@41 -- # [[ def859529f304aa6bf42956db56a5c90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.353 14:48:46 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:04.613 14:48:47 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:04.613 14:48:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:04.613 14:48:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:04.613 [ 0]:0x1 00:12:04.613 14:48:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.613 14:48:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:04.613 14:48:47 -- target/ns_masking.sh@40 -- # nguid=def859529f304aa6bf42956db56a5c90 00:12:04.613 14:48:47 -- target/ns_masking.sh@41 -- # [[ def859529f304aa6bf42956db56a5c90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.613 14:48:47 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:04.613 14:48:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:04.613 14:48:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:04.613 [ 1]:0x2 00:12:04.613 14:48:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.613 14:48:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:04.613 14:48:47 -- target/ns_masking.sh@40 -- # nguid=1c532573099d49f0873c57359ab034cf 00:12:04.613 14:48:47 -- target/ns_masking.sh@41 -- # [[ 1c532573099d49f0873c57359ab034cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.613 14:48:47 -- target/ns_masking.sh@69 -- # disconnect 00:12:04.613 14:48:47 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.613 14:48:47 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.874 14:48:47 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:05.135 14:48:47 -- target/ns_masking.sh@77 -- # connect 1 00:12:05.135 14:48:47 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c07dccb0-7c43-4404-9c6f-f87d95edfcb0 -a 10.0.0.2 -s 4420 -i 4 00:12:05.135 14:48:47 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:05.135 14:48:47 -- common/autotest_common.sh@1184 -- # local i=0 00:12:05.135 14:48:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.135 14:48:47 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:12:05.135 14:48:47 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:12:05.135 14:48:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:07.680 14:48:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:07.680 14:48:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:07.680 14:48:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.680 14:48:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:07.680 14:48:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.680 14:48:49 -- common/autotest_common.sh@1194 -- # return 0 00:12:07.680 14:48:49 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:07.680 14:48:49 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:07.680 14:48:49 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:07.680 14:48:49 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:07.680 14:48:49 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:07.680 14:48:49 -- common/autotest_common.sh@638 -- # local es=0 00:12:07.680 14:48:49 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.680 14:48:49 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:07.680 14:48:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.680 14:48:49 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:07.680 14:48:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.680 14:48:49 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:07.680 14:48:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.680 14:48:49 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.680 14:48:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.680 14:48:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.680 14:48:49 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:07.680 14:48:49 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.680 14:48:49 -- common/autotest_common.sh@641 -- # es=1 00:12:07.680 14:48:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:07.680 14:48:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:07.680 14:48:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:07.680 14:48:49 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:07.680 14:48:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.680 14:48:49 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.680 [ 0]:0x2 00:12:07.680 14:48:49 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:12:07.680 14:48:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.680 14:48:49 -- target/ns_masking.sh@40 -- # nguid=1c532573099d49f0873c57359ab034cf 00:12:07.680 14:48:49 -- target/ns_masking.sh@41 -- # [[ 1c532573099d49f0873c57359ab034cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.680 14:48:49 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.680 14:48:50 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:07.680 14:48:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.680 14:48:50 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.680 [ 0]:0x1 00:12:07.680 14:48:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.680 14:48:50 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.680 14:48:50 -- target/ns_masking.sh@40 -- # nguid=def859529f304aa6bf42956db56a5c90 00:12:07.680 14:48:50 -- target/ns_masking.sh@41 -- # [[ def859529f304aa6bf42956db56a5c90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.680 14:48:50 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:07.680 14:48:50 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.680 14:48:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.680 [ 1]:0x2 00:12:07.680 14:48:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.680 14:48:50 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.680 14:48:50 -- target/ns_masking.sh@40 -- # nguid=1c532573099d49f0873c57359ab034cf 00:12:07.680 14:48:50 -- target/ns_masking.sh@41 -- # [[ 1c532573099d49f0873c57359ab034cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.680 14:48:50 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.941 14:48:50 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:07.941 14:48:50 -- common/autotest_common.sh@638 -- # local es=0 00:12:07.941 14:48:50 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.941 14:48:50 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:07.941 14:48:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.941 14:48:50 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:07.942 14:48:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.942 14:48:50 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:07.942 14:48:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.942 14:48:50 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.942 14:48:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.942 14:48:50 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.942 14:48:50 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:07.942 14:48:50 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.942 14:48:50 -- common/autotest_common.sh@641 -- # es=1 00:12:07.942 14:48:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:07.942 14:48:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:07.942 14:48:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:07.942 14:48:50 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:07.942 14:48:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.942 14:48:50 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.942 [ 0]:0x2 00:12:07.942 14:48:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.942 14:48:50 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.942 14:48:50 -- target/ns_masking.sh@40 -- # nguid=1c532573099d49f0873c57359ab034cf 00:12:07.942 14:48:50 -- target/ns_masking.sh@41 -- # [[ 1c532573099d49f0873c57359ab034cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.942 14:48:50 -- target/ns_masking.sh@91 -- # disconnect 00:12:07.942 14:48:50 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.942 14:48:50 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:08.202 14:48:50 -- target/ns_masking.sh@95 -- # connect 2 00:12:08.202 14:48:50 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c07dccb0-7c43-4404-9c6f-f87d95edfcb0 -a 10.0.0.2 -s 4420 -i 4 00:12:08.202 14:48:50 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:08.202 14:48:50 -- common/autotest_common.sh@1184 -- # local i=0 00:12:08.202 14:48:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.202 14:48:50 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:08.202 14:48:50 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:08.202 14:48:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:10.745 14:48:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:10.745 14:48:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:10.745 14:48:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.745 14:48:52 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:12:10.745 14:48:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.745 14:48:52 -- common/autotest_common.sh@1194 -- # return 0 00:12:10.745 14:48:52 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:10.745 14:48:52 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:10.745 14:48:52 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:10.745 14:48:52 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:10.745 14:48:52 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:10.745 14:48:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.745 14:48:52 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.745 [ 0]:0x1 00:12:10.745 14:48:53 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.745 14:48:53 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.745 14:48:53 -- target/ns_masking.sh@40 -- # nguid=def859529f304aa6bf42956db56a5c90 00:12:10.745 14:48:53 -- target/ns_masking.sh@41 -- # [[ def859529f304aa6bf42956db56a5c90 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.745 14:48:53 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:10.745 14:48:53 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.745 14:48:53 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.745 [ 1]:0x2 
00:12:10.745 14:48:53 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.745 14:48:53 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.745 14:48:53 -- target/ns_masking.sh@40 -- # nguid=1c532573099d49f0873c57359ab034cf 00:12:10.745 14:48:53 -- target/ns_masking.sh@41 -- # [[ 1c532573099d49f0873c57359ab034cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.745 14:48:53 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.745 14:48:53 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:10.745 14:48:53 -- common/autotest_common.sh@638 -- # local es=0 00:12:10.745 14:48:53 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.745 14:48:53 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:10.745 14:48:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.745 14:48:53 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:10.745 14:48:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.745 14:48:53 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:10.745 14:48:53 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.745 14:48:53 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.745 14:48:53 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.745 14:48:53 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.005 14:48:53 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:11.005 14:48:53 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.005 14:48:53 -- common/autotest_common.sh@641 -- # es=1 00:12:11.005 14:48:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:11.005 14:48:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:11.005 14:48:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:11.005 14:48:53 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:11.005 14:48:53 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:11.005 14:48:53 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:11.005 [ 0]:0x2 00:12:11.005 14:48:53 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.005 14:48:53 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:11.005 14:48:53 -- target/ns_masking.sh@40 -- # nguid=1c532573099d49f0873c57359ab034cf 00:12:11.005 14:48:53 -- target/ns_masking.sh@41 -- # [[ 1c532573099d49f0873c57359ab034cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.005 14:48:53 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:11.005 14:48:53 -- common/autotest_common.sh@638 -- # local es=0 00:12:11.005 14:48:53 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:11.005 14:48:53 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.005 14:48:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:11.005 14:48:53 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.005 14:48:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:11.005 14:48:53 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.005 14:48:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:11.005 14:48:53 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.005 14:48:53 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:11.005 14:48:53 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:11.005 [2024-04-26 14:48:53.619630] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:11.005 request: 00:12:11.005 { 00:12:11.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.005 "nsid": 2, 00:12:11.005 "host": "nqn.2016-06.io.spdk:host1", 00:12:11.005 "method": "nvmf_ns_remove_host", 00:12:11.005 "req_id": 1 00:12:11.005 } 00:12:11.005 Got JSON-RPC error response 00:12:11.005 response: 00:12:11.005 { 00:12:11.005 "code": -32602, 00:12:11.005 "message": "Invalid parameters" 00:12:11.005 } 00:12:11.005 14:48:53 -- common/autotest_common.sh@641 -- # es=1 00:12:11.005 14:48:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:11.005 14:48:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:11.005 14:48:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:11.005 14:48:53 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:11.005 14:48:53 -- common/autotest_common.sh@638 -- # local es=0 00:12:11.005 14:48:53 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:11.005 14:48:53 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:11.005 14:48:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:11.005 14:48:53 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:11.005 14:48:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:11.005 14:48:53 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:11.005 14:48:53 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:11.005 14:48:53 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:11.005 14:48:53 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.005 14:48:53 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:11.266 14:48:53 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:11.266 14:48:53 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.266 14:48:53 -- common/autotest_common.sh@641 -- # es=1 00:12:11.266 14:48:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:11.266 14:48:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:11.266 14:48:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:11.266 14:48:53 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:11.266 14:48:53 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:11.266 14:48:53 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:11.266 [ 0]:0x2 00:12:11.266 14:48:53 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.266 14:48:53 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:12:11.266 14:48:53 -- target/ns_masking.sh@40 -- # nguid=1c532573099d49f0873c57359ab034cf 00:12:11.266 14:48:53 -- target/ns_masking.sh@41 -- # [[ 1c532573099d49f0873c57359ab034cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.266 14:48:53 -- target/ns_masking.sh@108 -- # disconnect 00:12:11.266 14:48:53 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.266 14:48:53 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.525 14:48:53 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:11.525 14:48:53 -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:11.525 14:48:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:11.525 14:48:53 -- nvmf/common.sh@117 -- # sync 00:12:11.525 14:48:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.525 14:48:53 -- nvmf/common.sh@120 -- # set +e 00:12:11.525 14:48:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.525 14:48:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.525 rmmod nvme_tcp 00:12:11.525 rmmod nvme_fabrics 00:12:11.525 rmmod nvme_keyring 00:12:11.525 14:48:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.525 14:48:54 -- nvmf/common.sh@124 -- # set -e 00:12:11.525 14:48:54 -- nvmf/common.sh@125 -- # return 0 00:12:11.525 14:48:54 -- nvmf/common.sh@478 -- # '[' -n 975779 ']' 00:12:11.525 14:48:54 -- nvmf/common.sh@479 -- # killprocess 975779 00:12:11.525 14:48:54 -- common/autotest_common.sh@936 -- # '[' -z 975779 ']' 00:12:11.525 14:48:54 -- common/autotest_common.sh@940 -- # kill -0 975779 00:12:11.525 14:48:54 -- common/autotest_common.sh@941 -- # uname 00:12:11.525 14:48:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:11.525 14:48:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 975779 00:12:11.525 14:48:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:11.525 14:48:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:11.525 14:48:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 975779' 00:12:11.525 killing process with pid 975779 00:12:11.525 14:48:54 -- common/autotest_common.sh@955 -- # kill 975779 00:12:11.525 14:48:54 -- common/autotest_common.sh@960 -- # wait 975779 00:12:11.785 14:48:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:11.785 14:48:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:11.785 14:48:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:11.785 14:48:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.785 14:48:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.785 14:48:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.785 14:48:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.785 14:48:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.696 14:48:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.696 00:12:13.696 real 0m21.033s 00:12:13.696 user 0m49.950s 00:12:13.696 sys 0m6.883s 00:12:13.696 14:48:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:13.696 14:48:56 -- common/autotest_common.sh@10 -- # set +x 00:12:13.696 ************************************ 00:12:13.696 END TEST nvmf_ns_masking 00:12:13.696 
************************************ 00:12:13.956 14:48:56 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:13.956 14:48:56 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:13.956 14:48:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:13.956 14:48:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.956 14:48:56 -- common/autotest_common.sh@10 -- # set +x 00:12:13.956 ************************************ 00:12:13.956 START TEST nvmf_nvme_cli 00:12:13.956 ************************************ 00:12:13.956 14:48:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:14.217 * Looking for test storage... 00:12:14.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.217 14:48:56 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.217 14:48:56 -- nvmf/common.sh@7 -- # uname -s 00:12:14.217 14:48:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.217 14:48:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.217 14:48:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.217 14:48:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.217 14:48:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.217 14:48:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.217 14:48:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.217 14:48:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.217 14:48:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.217 14:48:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.217 14:48:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:14.217 14:48:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:14.217 14:48:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.217 14:48:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.217 14:48:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.217 14:48:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.217 14:48:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.217 14:48:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.217 14:48:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.217 14:48:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.217 14:48:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.217 14:48:56 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.217 14:48:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.217 14:48:56 -- paths/export.sh@5 -- # export PATH 00:12:14.217 14:48:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.217 14:48:56 -- nvmf/common.sh@47 -- # : 0 00:12:14.217 14:48:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.218 14:48:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.218 14:48:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.218 14:48:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.218 14:48:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.218 14:48:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.218 14:48:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.218 14:48:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.218 14:48:56 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.218 14:48:56 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.218 14:48:56 -- target/nvme_cli.sh@14 -- # devs=() 00:12:14.218 14:48:56 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:14.218 14:48:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:14.218 14:48:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.218 14:48:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:14.218 14:48:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:14.218 14:48:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:14.218 14:48:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.218 14:48:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.218 14:48:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.218 14:48:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:14.218 14:48:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:14.218 14:48:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.218 14:48:56 -- common/autotest_common.sh@10 -- # set +x 00:12:22.432 14:49:03 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:22.432 14:49:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.432 14:49:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.432 14:49:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.432 14:49:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.432 14:49:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.432 14:49:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.432 14:49:03 -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.432 14:49:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.432 14:49:03 -- nvmf/common.sh@296 -- # e810=() 00:12:22.432 14:49:03 -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.432 14:49:03 -- nvmf/common.sh@297 -- # x722=() 00:12:22.432 14:49:03 -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.432 14:49:03 -- nvmf/common.sh@298 -- # mlx=() 00:12:22.432 14:49:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.432 14:49:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.432 14:49:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.432 14:49:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.432 14:49:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.432 14:49:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.432 14:49:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:22.432 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:22.432 14:49:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.432 14:49:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:22.432 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:22.432 14:49:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
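
The device selection above keys off PCI vendor:device IDs (0x8086:0x1592/0x159b for E810, 0x8086:0x37d2 for X722, plus the listed Mellanox IDs); nvmf/common.sh resolves them through a pci_bus_cache map built elsewhere in the harness. A rough stand-in using lspci, purely illustrative and not what common.sh actually does:

# Equivalent way to list E810 ports by vendor:device ID (illustrative only;
# the harness uses its pre-built pci_bus_cache instead of lspci).
for id in 8086:1592 8086:159b; do
    lspci -D -d "$id" | awk '{print $1}'
done
# Each matching PCI address then maps to its netdev via
# /sys/bus/pci/devices/<addr>/net/, which is the path the harness reads
# when it prints "Found net devices under 0000:31:00.0: cvl_0_0".
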
00:12:22.432 14:49:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.432 14:49:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.432 14:49:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.432 14:49:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:22.432 14:49:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.432 14:49:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:22.432 Found net devices under 0000:31:00.0: cvl_0_0 00:12:22.432 14:49:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.432 14:49:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.432 14:49:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.432 14:49:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:22.432 14:49:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.432 14:49:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:22.432 Found net devices under 0000:31:00.1: cvl_0_1 00:12:22.432 14:49:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.432 14:49:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:22.432 14:49:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:22.432 14:49:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:22.432 14:49:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.432 14:49:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.432 14:49:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.432 14:49:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:22.432 14:49:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.432 14:49:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.432 14:49:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:22.432 14:49:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.432 14:49:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.432 14:49:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:22.432 14:49:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:22.432 14:49:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.432 14:49:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.432 14:49:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.432 14:49:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.432 14:49:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:22.432 14:49:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.432 14:49:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.432 14:49:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.432 14:49:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:22.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:22.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:12:22.432 00:12:22.432 --- 10.0.0.2 ping statistics --- 00:12:22.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.432 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:12:22.432 14:49:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:12:22.432 00:12:22.432 --- 10.0.0.1 ping statistics --- 00:12:22.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.432 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:12:22.432 14:49:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.432 14:49:03 -- nvmf/common.sh@411 -- # return 0 00:12:22.432 14:49:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:22.432 14:49:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.432 14:49:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:22.432 14:49:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.432 14:49:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:22.432 14:49:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:22.432 14:49:04 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:22.432 14:49:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:22.432 14:49:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:22.432 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:12:22.432 14:49:04 -- nvmf/common.sh@470 -- # nvmfpid=982470 00:12:22.432 14:49:04 -- nvmf/common.sh@471 -- # waitforlisten 982470 00:12:22.432 14:49:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.432 14:49:04 -- common/autotest_common.sh@817 -- # '[' -z 982470 ']' 00:12:22.432 14:49:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.432 14:49:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:22.432 14:49:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.432 14:49:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:22.432 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:12:22.432 [2024-04-26 14:49:04.093675] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:22.432 [2024-04-26 14:49:04.093742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.432 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.432 [2024-04-26 14:49:04.166816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.432 [2024-04-26 14:49:04.241197] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.432 [2024-04-26 14:49:04.241237] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:22.432 [2024-04-26 14:49:04.241246] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.432 [2024-04-26 14:49:04.241254] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.432 [2024-04-26 14:49:04.241263] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.432 [2024-04-26 14:49:04.241411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.432 [2024-04-26 14:49:04.241532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.432 [2024-04-26 14:49:04.241694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.432 [2024-04-26 14:49:04.241695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.432 14:49:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:22.432 14:49:04 -- common/autotest_common.sh@850 -- # return 0 00:12:22.432 14:49:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:22.433 14:49:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:22.433 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 14:49:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.433 14:49:04 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.433 14:49:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.433 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 [2024-04-26 14:49:04.921387] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.433 14:49:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.433 14:49:04 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:22.433 14:49:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.433 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 Malloc0 00:12:22.433 14:49:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.433 14:49:04 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:22.433 14:49:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.433 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 Malloc1 00:12:22.433 14:49:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.433 14:49:04 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:22.433 14:49:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.433 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 14:49:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.433 14:49:04 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.433 14:49:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.433 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 14:49:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.433 14:49:04 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.433 14:49:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.433 14:49:04 -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 14:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.433 14:49:05 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:22.433 14:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.433 14:49:05 -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 [2024-04-26 14:49:05.011326] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.433 14:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.433 14:49:05 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:22.433 14:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.433 14:49:05 -- common/autotest_common.sh@10 -- # set +x 00:12:22.433 14:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:22.433 14:49:05 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:22.693 00:12:22.693 Discovery Log Number of Records 2, Generation counter 2 00:12:22.693 =====Discovery Log Entry 0====== 00:12:22.693 trtype: tcp 00:12:22.693 adrfam: ipv4 00:12:22.693 subtype: current discovery subsystem 00:12:22.693 treq: not required 00:12:22.693 portid: 0 00:12:22.693 trsvcid: 4420 00:12:22.693 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.693 traddr: 10.0.0.2 00:12:22.693 eflags: explicit discovery connections, duplicate discovery information 00:12:22.693 sectype: none 00:12:22.693 =====Discovery Log Entry 1====== 00:12:22.693 trtype: tcp 00:12:22.693 adrfam: ipv4 00:12:22.693 subtype: nvme subsystem 00:12:22.693 treq: not required 00:12:22.693 portid: 0 00:12:22.693 trsvcid: 4420 00:12:22.693 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:22.693 traddr: 10.0.0.2 00:12:22.693 eflags: none 00:12:22.693 sectype: none 00:12:22.693 14:49:05 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:22.693 14:49:05 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:22.693 14:49:05 -- nvmf/common.sh@511 -- # local dev _ 00:12:22.693 14:49:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:22.693 14:49:05 -- nvmf/common.sh@510 -- # nvme list 00:12:22.693 14:49:05 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:22.693 14:49:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:22.693 14:49:05 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:22.693 14:49:05 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:22.693 14:49:05 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:22.693 14:49:05 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.075 14:49:06 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:24.075 14:49:06 -- common/autotest_common.sh@1184 -- # local i=0 00:12:24.075 14:49:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.075 14:49:06 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:24.075 14:49:06 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:24.075 14:49:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:25.985 14:49:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:26.245 14:49:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:26.245 14:49:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.245 14:49:08 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
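
Stripped of the xtrace plumbing, the nvme_cli target bring-up above reduces to the RPC sequence below. This is a condensed sketch: the rpc.py path, addresses, and serial number are taken from this run, while the --hostnqn/--hostid flags and the network-namespace wrapper used by the harness are omitted.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
     -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: discovery reports two log entries (discovery subsystem plus
# cnode1), and after connecting, two namespaces back the serial-number count.
nvme discover -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the test expects 2
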
00:12:26.245 14:49:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.245 14:49:08 -- common/autotest_common.sh@1194 -- # return 0 00:12:26.245 14:49:08 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:26.245 14:49:08 -- nvmf/common.sh@511 -- # local dev _ 00:12:26.245 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.245 14:49:08 -- nvmf/common.sh@510 -- # nvme list 00:12:26.245 14:49:08 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:26.245 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.245 14:49:08 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:26.245 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.245 14:49:08 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:26.245 14:49:08 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:26.245 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.245 14:49:08 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:26.245 14:49:08 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:26.245 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.245 14:49:08 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:26.245 /dev/nvme0n1 ]] 00:12:26.245 14:49:08 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:26.245 14:49:08 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:26.245 14:49:08 -- nvmf/common.sh@511 -- # local dev _ 00:12:26.245 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.245 14:49:08 -- nvmf/common.sh@510 -- # nvme list 00:12:26.505 14:49:08 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:26.505 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.505 14:49:08 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:26.505 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.505 14:49:08 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:26.505 14:49:08 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:26.505 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.505 14:49:08 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:26.505 14:49:08 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:26.505 14:49:08 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:26.505 14:49:08 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:26.505 14:49:08 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.802 14:49:09 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.802 14:49:09 -- common/autotest_common.sh@1205 -- # local i=0 00:12:26.802 14:49:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:26.802 14:49:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.802 14:49:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:26.802 14:49:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.802 14:49:09 -- common/autotest_common.sh@1217 -- # return 0 00:12:26.802 14:49:09 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:26.802 14:49:09 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.802 14:49:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.802 14:49:09 -- common/autotest_common.sh@10 -- # set +x 00:12:26.802 14:49:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.802 14:49:09 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:26.802 14:49:09 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:26.802 14:49:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:26.802 14:49:09 -- nvmf/common.sh@117 -- # sync 00:12:26.802 14:49:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.802 14:49:09 -- nvmf/common.sh@120 -- # set +e 00:12:26.802 14:49:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.802 14:49:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.802 rmmod nvme_tcp 00:12:26.802 rmmod nvme_fabrics 00:12:26.802 rmmod nvme_keyring 00:12:26.802 14:49:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.802 14:49:09 -- nvmf/common.sh@124 -- # set -e 00:12:26.802 14:49:09 -- nvmf/common.sh@125 -- # return 0 00:12:26.802 14:49:09 -- nvmf/common.sh@478 -- # '[' -n 982470 ']' 00:12:26.802 14:49:09 -- nvmf/common.sh@479 -- # killprocess 982470 00:12:26.802 14:49:09 -- common/autotest_common.sh@936 -- # '[' -z 982470 ']' 00:12:26.802 14:49:09 -- common/autotest_common.sh@940 -- # kill -0 982470 00:12:26.802 14:49:09 -- common/autotest_common.sh@941 -- # uname 00:12:26.802 14:49:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.802 14:49:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 982470 00:12:26.802 14:49:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.802 14:49:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.802 14:49:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 982470' 00:12:26.802 killing process with pid 982470 00:12:26.802 14:49:09 -- common/autotest_common.sh@955 -- # kill 982470 00:12:26.802 14:49:09 -- common/autotest_common.sh@960 -- # wait 982470 00:12:27.063 14:49:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:27.063 14:49:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:27.063 14:49:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:27.063 14:49:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.063 14:49:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.063 14:49:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.063 14:49:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.063 14:49:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.602 14:49:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.602 00:12:29.602 real 0m15.109s 00:12:29.602 user 0m23.402s 00:12:29.602 sys 0m6.047s 00:12:29.602 14:49:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:29.602 14:49:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.602 ************************************ 00:12:29.602 END TEST nvmf_nvme_cli 00:12:29.602 ************************************ 00:12:29.602 14:49:11 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:29.602 14:49:11 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:29.602 14:49:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:29.602 14:49:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.602 14:49:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.602 ************************************ 00:12:29.602 START TEST nvmf_vfio_user 00:12:29.602 ************************************ 00:12:29.602 14:49:11 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:29.602 * Looking for test storage... 00:12:29.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.602 14:49:11 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.602 14:49:11 -- nvmf/common.sh@7 -- # uname -s 00:12:29.602 14:49:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.602 14:49:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.602 14:49:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.602 14:49:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.602 14:49:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.602 14:49:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.602 14:49:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.602 14:49:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.602 14:49:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.602 14:49:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.602 14:49:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.602 14:49:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.602 14:49:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.602 14:49:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.602 14:49:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.602 14:49:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.602 14:49:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.602 14:49:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.602 14:49:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.602 14:49:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.602 14:49:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.602 14:49:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.602 14:49:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.602 14:49:11 -- paths/export.sh@5 -- # export PATH 00:12:29.602 14:49:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.603 14:49:11 -- nvmf/common.sh@47 -- # : 0 00:12:29.603 14:49:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.603 14:49:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.603 14:49:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.603 14:49:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.603 14:49:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.603 14:49:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.603 14:49:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.603 14:49:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=984173 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 984173' 00:12:29.603 Process pid: 984173 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 984173 00:12:29.603 14:49:11 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:29.603 14:49:11 -- common/autotest_common.sh@817 -- # '[' -z 984173 ']' 00:12:29.603 14:49:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.603 14:49:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:29.603 14:49:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.603 14:49:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:29.603 14:49:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.603 [2024-04-26 14:49:12.018978] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:29.603 [2024-04-26 14:49:12.019038] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.603 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.603 [2024-04-26 14:49:12.080101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.603 [2024-04-26 14:49:12.143177] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.603 [2024-04-26 14:49:12.143216] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.603 [2024-04-26 14:49:12.143224] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.603 [2024-04-26 14:49:12.143232] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.603 [2024-04-26 14:49:12.143239] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.603 [2024-04-26 14:49:12.143401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.603 [2024-04-26 14:49:12.143516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.603 [2024-04-26 14:49:12.143671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.603 [2024-04-26 14:49:12.143672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.174 14:49:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:30.174 14:49:12 -- common/autotest_common.sh@850 -- # return 0 00:12:30.174 14:49:12 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:31.554 14:49:13 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:31.554 14:49:13 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:31.554 14:49:13 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:31.554 14:49:13 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:31.554 14:49:13 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:31.554 14:49:13 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:31.554 Malloc1 00:12:31.555 14:49:14 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:31.814 14:49:14 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:32.074 14:49:14 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:32.074 14:49:14 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:32.074 14:49:14 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:32.074 14:49:14 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:32.334 Malloc2 00:12:32.334 14:49:14 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:32.595 14:49:15 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:32.595 14:49:15 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:32.858 14:49:15 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:32.858 14:49:15 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:32.858 14:49:15 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:32.858 14:49:15 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:32.858 14:49:15 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:32.858 14:49:15 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:32.858 [2024-04-26 14:49:15.357652] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:32.858 [2024-04-26 14:49:15.357717] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984867 ] 00:12:32.858 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.858 [2024-04-26 14:49:15.396441] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:32.858 [2024-04-26 14:49:15.406167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.858 [2024-04-26 14:49:15.406188] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff8e21db000 00:12:32.858 [2024-04-26 14:49:15.407165] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.858 [2024-04-26 14:49:15.408176] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.858 [2024-04-26 14:49:15.409170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.858 [2024-04-26 14:49:15.410187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.858 [2024-04-26 14:49:15.411196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.858 [2024-04-26 14:49:15.412199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
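
For reference, the vfio-user controller that the identify pass above is probing was created with the sequence below, condensed from the trace with the same paths and NQNs as in this run; it is a sketch of the first device only (the test repeats it for vfio-user2/cnode2).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
# For the VFIOUSER transport the listener "address" is the directory holding
# the vfio-user control files, and the service id is 0.
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
     -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

# The initiator then points spdk_nvme_identify at that directory, which is
# what produces the bar-mapping DEBUG output in the log:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci
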
00:12:32.858 [2024-04-26 14:49:15.413205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.858 [2024-04-26 14:49:15.414211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.858 [2024-04-26 14:49:15.415220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.858 [2024-04-26 14:49:15.415233] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff8e21d0000 00:12:32.858 [2024-04-26 14:49:15.416563] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.858 [2024-04-26 14:49:15.436465] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:32.858 [2024-04-26 14:49:15.436487] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:32.858 [2024-04-26 14:49:15.439358] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.858 [2024-04-26 14:49:15.439402] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:32.858 [2024-04-26 14:49:15.439484] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:32.858 [2024-04-26 14:49:15.439502] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:32.858 [2024-04-26 14:49:15.439508] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:32.858 [2024-04-26 14:49:15.440360] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:32.858 [2024-04-26 14:49:15.440372] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:32.858 [2024-04-26 14:49:15.440379] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:32.858 [2024-04-26 14:49:15.441362] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.858 [2024-04-26 14:49:15.441370] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:32.858 [2024-04-26 14:49:15.441378] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:32.858 [2024-04-26 14:49:15.442366] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:32.858 [2024-04-26 14:49:15.442374] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:32.858 [2024-04-26 14:49:15.443366] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:32.858 [2024-04-26 14:49:15.443374] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:32.858 [2024-04-26 14:49:15.443379] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:32.858 [2024-04-26 14:49:15.443385] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:32.858 [2024-04-26 14:49:15.443491] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:32.858 [2024-04-26 14:49:15.443495] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:32.858 [2024-04-26 14:49:15.443500] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:32.858 [2024-04-26 14:49:15.444375] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:32.858 [2024-04-26 14:49:15.445379] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:32.858 [2024-04-26 14:49:15.446388] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.858 [2024-04-26 14:49:15.447384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.858 [2024-04-26 14:49:15.447438] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:32.858 [2024-04-26 14:49:15.448400] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:32.858 [2024-04-26 14:49:15.448407] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:32.858 [2024-04-26 14:49:15.448412] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:32.858 [2024-04-26 14:49:15.448433] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:32.858 [2024-04-26 14:49:15.448445] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:32.858 [2024-04-26 14:49:15.448462] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.858 [2024-04-26 14:49:15.448467] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.858 [2024-04-26 14:49:15.448480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.858 [2024-04-26 
14:49:15.448515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:32.858 [2024-04-26 14:49:15.448524] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:32.858 [2024-04-26 14:49:15.448529] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:32.858 [2024-04-26 14:49:15.448534] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:32.858 [2024-04-26 14:49:15.448538] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:32.858 [2024-04-26 14:49:15.448543] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:32.858 [2024-04-26 14:49:15.448547] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:32.858 [2024-04-26 14:49:15.448552] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:32.858 [2024-04-26 14:49:15.448560] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:32.858 [2024-04-26 14:49:15.448570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:32.858 [2024-04-26 14:49:15.448579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:32.858 [2024-04-26 14:49:15.448592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.858 [2024-04-26 14:49:15.448600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.858 [2024-04-26 14:49:15.448609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.858 [2024-04-26 14:49:15.448617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.858 [2024-04-26 14:49:15.448621] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:32.858 [2024-04-26 14:49:15.448630] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:32.858 [2024-04-26 14:49:15.448638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:32.858 [2024-04-26 14:49:15.448648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:32.858 [2024-04-26 14:49:15.448653] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:32.858 [2024-04-26 14:49:15.448658] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:32.858 [2024-04-26 14:49:15.448668] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:32.858 [2024-04-26 14:49:15.448674] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:32.858 [2024-04-26 14:49:15.448684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.858 [2024-04-26 14:49:15.448693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:32.858 [2024-04-26 14:49:15.448742] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448749] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448757] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:32.859 [2024-04-26 14:49:15.448761] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:32.859 [2024-04-26 14:49:15.448767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.448776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.448786] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:32.859 [2024-04-26 14:49:15.448793] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448801] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448808] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.859 [2024-04-26 14:49:15.448812] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.859 [2024-04-26 14:49:15.448818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.448833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.448850] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448858] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448865] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:12:32.859 [2024-04-26 14:49:15.448869] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.859 [2024-04-26 14:49:15.448875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.448888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.448896] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448903] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448910] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448915] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448922] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448927] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:32.859 [2024-04-26 14:49:15.448932] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:32.859 [2024-04-26 14:49:15.448937] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:32.859 [2024-04-26 14:49:15.448953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.448963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.448974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.448985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.448996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.449007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.449018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.449025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.449035] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:32.859 [2024-04-26 14:49:15.449040] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:32.859 [2024-04-26 14:49:15.449043] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:32.859 [2024-04-26 14:49:15.449047] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:32.859 [2024-04-26 14:49:15.449053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:32.859 [2024-04-26 14:49:15.449060] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:32.859 [2024-04-26 14:49:15.449065] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:32.859 [2024-04-26 14:49:15.449071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.449078] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:32.859 [2024-04-26 14:49:15.449082] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.859 [2024-04-26 14:49:15.449088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.449095] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:32.859 [2024-04-26 14:49:15.449099] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:32.859 [2024-04-26 14:49:15.449105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:32.859 [2024-04-26 14:49:15.449112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.449126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.449135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:32.859 [2024-04-26 14:49:15.449142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:32.859 ===================================================== 00:12:32.859 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.859 ===================================================== 00:12:32.859 Controller Capabilities/Features 00:12:32.859 ================================ 00:12:32.859 Vendor ID: 4e58 00:12:32.859 Subsystem Vendor ID: 4e58 00:12:32.859 Serial Number: SPDK1 00:12:32.859 Model Number: SPDK bdev Controller 00:12:32.859 Firmware Version: 24.05 00:12:32.859 Recommended Arb Burst: 6 00:12:32.859 IEEE OUI Identifier: 8d 6b 50 00:12:32.859 Multi-path I/O 00:12:32.859 May have multiple subsystem ports: Yes 00:12:32.859 May have multiple controllers: Yes 00:12:32.859 Associated with SR-IOV VF: No 00:12:32.859 Max Data Transfer Size: 131072 00:12:32.859 Max Number of Namespaces: 32 00:12:32.859 Max Number of I/O Queues: 127 00:12:32.859 NVMe 
Specification Version (VS): 1.3 00:12:32.859 NVMe Specification Version (Identify): 1.3 00:12:32.859 Maximum Queue Entries: 256 00:12:32.859 Contiguous Queues Required: Yes 00:12:32.859 Arbitration Mechanisms Supported 00:12:32.859 Weighted Round Robin: Not Supported 00:12:32.859 Vendor Specific: Not Supported 00:12:32.859 Reset Timeout: 15000 ms 00:12:32.859 Doorbell Stride: 4 bytes 00:12:32.859 NVM Subsystem Reset: Not Supported 00:12:32.859 Command Sets Supported 00:12:32.859 NVM Command Set: Supported 00:12:32.859 Boot Partition: Not Supported 00:12:32.859 Memory Page Size Minimum: 4096 bytes 00:12:32.859 Memory Page Size Maximum: 4096 bytes 00:12:32.859 Persistent Memory Region: Not Supported 00:12:32.859 Optional Asynchronous Events Supported 00:12:32.859 Namespace Attribute Notices: Supported 00:12:32.859 Firmware Activation Notices: Not Supported 00:12:32.859 ANA Change Notices: Not Supported 00:12:32.859 PLE Aggregate Log Change Notices: Not Supported 00:12:32.859 LBA Status Info Alert Notices: Not Supported 00:12:32.859 EGE Aggregate Log Change Notices: Not Supported 00:12:32.859 Normal NVM Subsystem Shutdown event: Not Supported 00:12:32.859 Zone Descriptor Change Notices: Not Supported 00:12:32.859 Discovery Log Change Notices: Not Supported 00:12:32.859 Controller Attributes 00:12:32.859 128-bit Host Identifier: Supported 00:12:32.859 Non-Operational Permissive Mode: Not Supported 00:12:32.859 NVM Sets: Not Supported 00:12:32.859 Read Recovery Levels: Not Supported 00:12:32.859 Endurance Groups: Not Supported 00:12:32.859 Predictable Latency Mode: Not Supported 00:12:32.859 Traffic Based Keep ALive: Not Supported 00:12:32.859 Namespace Granularity: Not Supported 00:12:32.859 SQ Associations: Not Supported 00:12:32.859 UUID List: Not Supported 00:12:32.859 Multi-Domain Subsystem: Not Supported 00:12:32.859 Fixed Capacity Management: Not Supported 00:12:32.859 Variable Capacity Management: Not Supported 00:12:32.859 Delete Endurance Group: Not Supported 00:12:32.859 Delete NVM Set: Not Supported 00:12:32.859 Extended LBA Formats Supported: Not Supported 00:12:32.859 Flexible Data Placement Supported: Not Supported 00:12:32.859 00:12:32.859 Controller Memory Buffer Support 00:12:32.859 ================================ 00:12:32.859 Supported: No 00:12:32.859 00:12:32.859 Persistent Memory Region Support 00:12:32.859 ================================ 00:12:32.859 Supported: No 00:12:32.859 00:12:32.860 Admin Command Set Attributes 00:12:32.860 ============================ 00:12:32.860 Security Send/Receive: Not Supported 00:12:32.860 Format NVM: Not Supported 00:12:32.860 Firmware Activate/Download: Not Supported 00:12:32.860 Namespace Management: Not Supported 00:12:32.860 Device Self-Test: Not Supported 00:12:32.860 Directives: Not Supported 00:12:32.860 NVMe-MI: Not Supported 00:12:32.860 Virtualization Management: Not Supported 00:12:32.860 Doorbell Buffer Config: Not Supported 00:12:32.860 Get LBA Status Capability: Not Supported 00:12:32.860 Command & Feature Lockdown Capability: Not Supported 00:12:32.860 Abort Command Limit: 4 00:12:32.860 Async Event Request Limit: 4 00:12:32.860 Number of Firmware Slots: N/A 00:12:32.860 Firmware Slot 1 Read-Only: N/A 00:12:32.860 Firmware Activation Without Reset: N/A 00:12:32.860 Multiple Update Detection Support: N/A 00:12:32.860 Firmware Update Granularity: No Information Provided 00:12:32.860 Per-Namespace SMART Log: No 00:12:32.860 Asymmetric Namespace Access Log Page: Not Supported 00:12:32.860 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:12:32.860 Command Effects Log Page: Supported 00:12:32.860 Get Log Page Extended Data: Supported 00:12:32.860 Telemetry Log Pages: Not Supported 00:12:32.860 Persistent Event Log Pages: Not Supported 00:12:32.860 Supported Log Pages Log Page: May Support 00:12:32.860 Commands Supported & Effects Log Page: Not Supported 00:12:32.860 Feature Identifiers & Effects Log Page:May Support 00:12:32.860 NVMe-MI Commands & Effects Log Page: May Support 00:12:32.860 Data Area 4 for Telemetry Log: Not Supported 00:12:32.860 Error Log Page Entries Supported: 128 00:12:32.860 Keep Alive: Supported 00:12:32.860 Keep Alive Granularity: 10000 ms 00:12:32.860 00:12:32.860 NVM Command Set Attributes 00:12:32.860 ========================== 00:12:32.860 Submission Queue Entry Size 00:12:32.860 Max: 64 00:12:32.860 Min: 64 00:12:32.860 Completion Queue Entry Size 00:12:32.860 Max: 16 00:12:32.860 Min: 16 00:12:32.860 Number of Namespaces: 32 00:12:32.860 Compare Command: Supported 00:12:32.860 Write Uncorrectable Command: Not Supported 00:12:32.860 Dataset Management Command: Supported 00:12:32.860 Write Zeroes Command: Supported 00:12:32.860 Set Features Save Field: Not Supported 00:12:32.860 Reservations: Not Supported 00:12:32.860 Timestamp: Not Supported 00:12:32.860 Copy: Supported 00:12:32.860 Volatile Write Cache: Present 00:12:32.860 Atomic Write Unit (Normal): 1 00:12:32.860 Atomic Write Unit (PFail): 1 00:12:32.860 Atomic Compare & Write Unit: 1 00:12:32.860 Fused Compare & Write: Supported 00:12:32.860 Scatter-Gather List 00:12:32.860 SGL Command Set: Supported (Dword aligned) 00:12:32.860 SGL Keyed: Not Supported 00:12:32.860 SGL Bit Bucket Descriptor: Not Supported 00:12:32.860 SGL Metadata Pointer: Not Supported 00:12:32.860 Oversized SGL: Not Supported 00:12:32.860 SGL Metadata Address: Not Supported 00:12:32.860 SGL Offset: Not Supported 00:12:32.860 Transport SGL Data Block: Not Supported 00:12:32.860 Replay Protected Memory Block: Not Supported 00:12:32.860 00:12:32.860 Firmware Slot Information 00:12:32.860 ========================= 00:12:32.860 Active slot: 1 00:12:32.860 Slot 1 Firmware Revision: 24.05 00:12:32.860 00:12:32.860 00:12:32.860 Commands Supported and Effects 00:12:32.860 ============================== 00:12:32.860 Admin Commands 00:12:32.860 -------------- 00:12:32.860 Get Log Page (02h): Supported 00:12:32.860 Identify (06h): Supported 00:12:32.860 Abort (08h): Supported 00:12:32.860 Set Features (09h): Supported 00:12:32.860 Get Features (0Ah): Supported 00:12:32.860 Asynchronous Event Request (0Ch): Supported 00:12:32.860 Keep Alive (18h): Supported 00:12:32.860 I/O Commands 00:12:32.860 ------------ 00:12:32.860 Flush (00h): Supported LBA-Change 00:12:32.860 Write (01h): Supported LBA-Change 00:12:32.860 Read (02h): Supported 00:12:32.860 Compare (05h): Supported 00:12:32.860 Write Zeroes (08h): Supported LBA-Change 00:12:32.860 Dataset Management (09h): Supported LBA-Change 00:12:32.860 Copy (19h): Supported LBA-Change 00:12:32.860 Unknown (79h): Supported LBA-Change 00:12:32.860 Unknown (7Ah): Supported 00:12:32.860 00:12:32.860 Error Log 00:12:32.860 ========= 00:12:32.860 00:12:32.860 Arbitration 00:12:32.860 =========== 00:12:32.860 Arbitration Burst: 1 00:12:32.860 00:12:32.860 Power Management 00:12:32.860 ================ 00:12:32.860 Number of Power States: 1 00:12:32.860 Current Power State: Power State #0 00:12:32.860 Power State #0: 00:12:32.860 Max Power: 0.00 W 00:12:32.860 Non-Operational State: Operational 00:12:32.860 Entry 
Latency: Not Reported 00:12:32.860 Exit Latency: Not Reported 00:12:32.860 Relative Read Throughput: 0 00:12:32.860 Relative Read Latency: 0 00:12:32.860 Relative Write Throughput: 0 00:12:32.860 Relative Write Latency: 0 00:12:32.860 Idle Power: Not Reported 00:12:32.860 Active Power: Not Reported 00:12:32.860 Non-Operational Permissive Mode: Not Supported 00:12:32.860 00:12:32.860 Health Information 00:12:32.860 ================== 00:12:32.860 Critical Warnings: 00:12:32.860 Available Spare Space: OK 00:12:32.860 Temperature: OK 00:12:32.860 Device Reliability: OK 00:12:32.860 Read Only: No 00:12:32.860 Volatile Memory Backup: OK 00:12:32.860 Current Temperature: 0 Kelvin (-2[2024-04-26 14:49:15.449246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:32.860 [2024-04-26 14:49:15.449255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:32.860 [2024-04-26 14:49:15.449283] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:32.860 [2024-04-26 14:49:15.449293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.860 [2024-04-26 14:49:15.449299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.860 [2024-04-26 14:49:15.449305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.860 [2024-04-26 14:49:15.449312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.860 [2024-04-26 14:49:15.449411] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.860 [2024-04-26 14:49:15.449420] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:32.860 [2024-04-26 14:49:15.450410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.860 [2024-04-26 14:49:15.450448] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:32.860 [2024-04-26 14:49:15.450454] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:32.860 [2024-04-26 14:49:15.451419] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:32.860 [2024-04-26 14:49:15.451429] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:32.860 [2024-04-26 14:49:15.451490] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:32.860 [2024-04-26 14:49:15.455845] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.860 73 Celsius) 00:12:32.860 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:32.860 Available Spare: 0% 00:12:32.860 Available Spare Threshold: 0% 00:12:32.860 Life Percentage Used: 0% 
00:12:32.860 Data Units Read: 0 00:12:32.860 Data Units Written: 0 00:12:32.860 Host Read Commands: 0 00:12:32.860 Host Write Commands: 0 00:12:32.860 Controller Busy Time: 0 minutes 00:12:32.860 Power Cycles: 0 00:12:32.860 Power On Hours: 0 hours 00:12:32.860 Unsafe Shutdowns: 0 00:12:32.860 Unrecoverable Media Errors: 0 00:12:32.860 Lifetime Error Log Entries: 0 00:12:32.860 Warning Temperature Time: 0 minutes 00:12:32.860 Critical Temperature Time: 0 minutes 00:12:32.860 00:12:32.860 Number of Queues 00:12:32.860 ================ 00:12:32.860 Number of I/O Submission Queues: 127 00:12:32.860 Number of I/O Completion Queues: 127 00:12:32.860 00:12:32.860 Active Namespaces 00:12:32.860 ================= 00:12:32.860 Namespace ID:1 00:12:32.860 Error Recovery Timeout: Unlimited 00:12:32.860 Command Set Identifier: NVM (00h) 00:12:32.860 Deallocate: Supported 00:12:32.860 Deallocated/Unwritten Error: Not Supported 00:12:32.860 Deallocated Read Value: Unknown 00:12:32.860 Deallocate in Write Zeroes: Not Supported 00:12:32.860 Deallocated Guard Field: 0xFFFF 00:12:32.860 Flush: Supported 00:12:32.860 Reservation: Supported 00:12:32.860 Namespace Sharing Capabilities: Multiple Controllers 00:12:32.860 Size (in LBAs): 131072 (0GiB) 00:12:32.860 Capacity (in LBAs): 131072 (0GiB) 00:12:32.860 Utilization (in LBAs): 131072 (0GiB) 00:12:32.860 NGUID: 3ABE128468484EFF9F464C011465E264 00:12:32.861 UUID: 3abe1284-6848-4eff-9f46-4c011465e264 00:12:32.861 Thin Provisioning: Not Supported 00:12:32.861 Per-NS Atomic Units: Yes 00:12:32.861 Atomic Boundary Size (Normal): 0 00:12:32.861 Atomic Boundary Size (PFail): 0 00:12:32.861 Atomic Boundary Offset: 0 00:12:32.861 Maximum Single Source Range Length: 65535 00:12:32.861 Maximum Copy Length: 65535 00:12:32.861 Maximum Source Range Count: 1 00:12:32.861 NGUID/EUI64 Never Reused: No 00:12:32.861 Namespace Write Protected: No 00:12:32.861 Number of LBA Formats: 1 00:12:32.861 Current LBA Format: LBA Format #00 00:12:32.861 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:32.861 00:12:32.861 14:49:15 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:33.122 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.122 [2024-04-26 14:49:15.639450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.409 [2024-04-26 14:49:20.656173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.409 Initializing NVMe Controllers 00:12:38.409 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:38.409 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:38.409 Initialization complete. Launching workers. 
00:12:38.409 ======================================================== 00:12:38.409 Latency(us) 00:12:38.409 Device Information : IOPS MiB/s Average min max 00:12:38.409 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39922.23 155.95 3205.91 851.14 7797.53 00:12:38.409 ======================================================== 00:12:38.409 Total : 39922.23 155.95 3205.91 851.14 7797.53 00:12:38.409 00:12:38.409 14:49:20 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:38.409 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.409 [2024-04-26 14:49:20.832009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:43.700 [2024-04-26 14:49:25.866538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:43.700 Initializing NVMe Controllers 00:12:43.700 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:43.700 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:43.700 Initialization complete. Launching workers. 00:12:43.700 ======================================================== 00:12:43.700 Latency(us) 00:12:43.700 Device Information : IOPS MiB/s Average min max 00:12:43.700 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16040.73 62.66 7979.18 6005.03 9951.58 00:12:43.700 ======================================================== 00:12:43.700 Total : 16040.73 62.66 7979.18 6005.03 9951.58 00:12:43.700 00:12:43.700 14:49:25 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:43.700 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.700 [2024-04-26 14:49:26.056392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.987 [2024-04-26 14:49:31.147112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.987 Initializing NVMe Controllers 00:12:48.987 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.987 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.987 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:48.987 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:48.987 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:48.987 Initialization complete. Launching workers. 
00:12:48.987 Starting thread on core 2 00:12:48.987 Starting thread on core 3 00:12:48.987 Starting thread on core 1 00:12:48.987 14:49:31 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:48.987 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.987 [2024-04-26 14:49:31.402236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.201 [2024-04-26 14:49:35.223987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.201 Initializing NVMe Controllers 00:12:53.201 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.201 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.201 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:53.201 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:53.201 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:53.201 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:53.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:53.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:53.201 Initialization complete. Launching workers. 00:12:53.201 Starting thread on core 1 with urgent priority queue 00:12:53.201 Starting thread on core 2 with urgent priority queue 00:12:53.201 Starting thread on core 3 with urgent priority queue 00:12:53.201 Starting thread on core 0 with urgent priority queue 00:12:53.201 SPDK bdev Controller (SPDK1 ) core 0: 6072.67 IO/s 16.47 secs/100000 ios 00:12:53.201 SPDK bdev Controller (SPDK1 ) core 1: 4643.00 IO/s 21.54 secs/100000 ios 00:12:53.201 SPDK bdev Controller (SPDK1 ) core 2: 3093.67 IO/s 32.32 secs/100000 ios 00:12:53.201 SPDK bdev Controller (SPDK1 ) core 3: 3982.67 IO/s 25.11 secs/100000 ios 00:12:53.201 ======================================================== 00:12:53.201 00:12:53.201 14:49:35 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:53.201 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.201 [2024-04-26 14:49:35.483250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.201 [2024-04-26 14:49:35.520475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.201 Initializing NVMe Controllers 00:12:53.201 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.201 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.201 Namespace ID: 1 size: 0GB 00:12:53.201 Initialization complete. 00:12:53.201 INFO: using host memory buffer for IO 00:12:53.201 Hello world! 
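For reference, each of the functional passes above drives the same emulated controller through an NVMe-oF transport ID string (trtype:VFIOUSER plus the vfio-user socket path and subsystem NQN) rather than a PCI address; only the example binary and its workload flags change between runs. The following is a minimal shell sketch consolidating the invocations recorded above. SPDK_DIR and TRID are illustrative shorthand for the full build-tree path and transport string that appear verbatim in the log; nothing else is assumed beyond what the log shows.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# 4 KiB, queue-depth-128 read and write throughput runs against the vfio-user controller
"$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
"$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

# multi-core reconnect and arbitration examples, followed by a single-namespace I/O smoke test
"$SPDK_DIR"/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
"$SPDK_DIR"/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
"$SPDK_DIR"/build/examples/hello_world -d 256 -g -r "$TRID"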
00:12:53.201 14:49:35 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:53.201 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.201 [2024-04-26 14:49:35.780263] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:54.139 Initializing NVMe Controllers 00:12:54.139 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:54.139 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:54.139 Initialization complete. Launching workers. 00:12:54.139 submit (in ns) avg, min, max = 8254.7, 3928.3, 4000066.7 00:12:54.139 complete (in ns) avg, min, max = 17597.6, 2391.7, 3999164.2 00:12:54.139 00:12:54.139 Submit histogram 00:12:54.139 ================ 00:12:54.139 Range in us Cumulative Count 00:12:54.139 3.920 - 3.947: 0.0311% ( 6) 00:12:54.139 3.947 - 3.973: 0.6887% ( 127) 00:12:54.139 3.973 - 4.000: 4.1839% ( 675) 00:12:54.139 4.000 - 4.027: 10.5944% ( 1238) 00:12:54.139 4.027 - 4.053: 20.7125% ( 1954) 00:12:54.139 4.053 - 4.080: 32.9536% ( 2364) 00:12:54.139 4.080 - 4.107: 45.7073% ( 2463) 00:12:54.139 4.107 - 4.133: 63.7324% ( 3481) 00:12:54.139 4.133 - 4.160: 78.9872% ( 2946) 00:12:54.139 4.160 - 4.187: 89.1829% ( 1969) 00:12:54.139 4.187 - 4.213: 95.3138% ( 1184) 00:12:54.139 4.213 - 4.240: 97.9132% ( 502) 00:12:54.139 4.240 - 4.267: 98.9385% ( 198) 00:12:54.139 4.267 - 4.293: 99.3372% ( 77) 00:12:54.139 4.293 - 4.320: 99.4718% ( 26) 00:12:54.139 4.320 - 4.347: 99.4874% ( 3) 00:12:54.139 4.347 - 4.373: 99.4925% ( 1) 00:12:54.139 4.400 - 4.427: 99.4977% ( 1) 00:12:54.139 4.533 - 4.560: 99.5081% ( 2) 00:12:54.139 4.800 - 4.827: 99.5133% ( 1) 00:12:54.139 4.960 - 4.987: 99.5184% ( 1) 00:12:54.139 4.987 - 5.013: 99.5236% ( 1) 00:12:54.139 5.200 - 5.227: 99.5288% ( 1) 00:12:54.139 5.227 - 5.253: 99.5340% ( 1) 00:12:54.139 5.680 - 5.707: 99.5443% ( 2) 00:12:54.139 5.867 - 5.893: 99.5495% ( 1) 00:12:54.139 5.947 - 5.973: 99.5547% ( 1) 00:12:54.139 5.973 - 6.000: 99.5702% ( 3) 00:12:54.139 6.000 - 6.027: 99.5754% ( 1) 00:12:54.139 6.027 - 6.053: 99.5806% ( 1) 00:12:54.139 6.053 - 6.080: 99.5857% ( 1) 00:12:54.139 6.080 - 6.107: 99.5909% ( 1) 00:12:54.139 6.107 - 6.133: 99.6013% ( 2) 00:12:54.139 6.133 - 6.160: 99.6116% ( 2) 00:12:54.139 6.160 - 6.187: 99.6220% ( 2) 00:12:54.139 6.187 - 6.213: 99.6324% ( 2) 00:12:54.139 6.213 - 6.240: 99.6582% ( 5) 00:12:54.139 6.240 - 6.267: 99.6686% ( 2) 00:12:54.139 6.293 - 6.320: 99.6790% ( 2) 00:12:54.139 6.320 - 6.347: 99.6893% ( 2) 00:12:54.139 6.347 - 6.373: 99.6945% ( 1) 00:12:54.139 6.373 - 6.400: 99.7048% ( 2) 00:12:54.139 6.400 - 6.427: 99.7152% ( 2) 00:12:54.139 6.427 - 6.453: 99.7204% ( 1) 00:12:54.139 6.453 - 6.480: 99.7307% ( 2) 00:12:54.139 6.480 - 6.507: 99.7463% ( 3) 00:12:54.139 6.507 - 6.533: 99.7566% ( 2) 00:12:54.139 6.560 - 6.587: 99.7618% ( 1) 00:12:54.139 6.587 - 6.613: 99.7670% ( 1) 00:12:54.139 6.613 - 6.640: 99.7722% ( 1) 00:12:54.139 6.640 - 6.667: 99.7773% ( 1) 00:12:54.139 6.667 - 6.693: 99.7825% ( 1) 00:12:54.139 6.720 - 6.747: 99.7877% ( 1) 00:12:54.139 6.747 - 6.773: 99.7929% ( 1) 00:12:54.139 6.773 - 6.800: 99.7981% ( 1) 00:12:54.139 6.827 - 6.880: 99.8084% ( 2) 00:12:54.140 6.880 - 6.933: 99.8136% ( 1) 00:12:54.140 6.933 - 6.987: 99.8239% ( 2) 00:12:54.140 6.987 - 7.040: 99.8291% ( 1) 00:12:54.140 7.040 - 7.093: 99.8447% ( 3) 00:12:54.140 7.307 - 7.360: 99.8550% ( 2) 
00:12:54.140 7.627 - 7.680: 99.8654% ( 2) 00:12:54.140 7.733 - 7.787: 99.8705% ( 1) 00:12:54.140 8.000 - 8.053: 99.8757% ( 1) 00:12:54.140 8.640 - 8.693: 99.8809% ( 1) 00:12:54.140 8.747 - 8.800: 99.8861% ( 1) 00:12:54.140 9.920 - 9.973: 99.8913% ( 1) 00:12:54.140 11.467 - 11.520: 99.8964% ( 1) 00:12:54.140 3986.773 - 4014.080: 100.0000% ( 20) 00:12:54.140 00:12:54.140 Complete histogram 00:12:54.140 ================== 00:12:54.140 Range in us Cumulative Count 00:12:54.140 2.387 - 2.400: 0.0104% ( 2) 00:12:54.140 2.400 - 2.413: 0.7767% ( 148) 00:12:54.140 2.413 - 2.427: 1.0356% ( 50) 00:12:54.140 2.427 - 2.440: 7.8500% ( 1316) 00:12:54.140 2.440 - 2.453: 16.2645% ( 1625) 00:12:54.140 2.453 - 2.467: 36.4229% ( 3893) 00:12:54.140 2.467 - 2.480: 65.1926% ( 5556) 00:12:54.140 2.480 - 2.493: 74.3113% ( 1761) 00:12:54.140 2.493 - [2024-04-26 14:49:36.802762] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:54.399 2.507: 81.0636% ( 1304) 00:12:54.399 2.507 - 2.520: 83.5284% ( 476) 00:12:54.399 2.520 - 2.533: 85.7343% ( 426) 00:12:54.399 2.533 - 2.547: 90.4567% ( 912) 00:12:54.399 2.547 - 2.560: 95.2517% ( 926) 00:12:54.399 2.560 - 2.573: 97.7941% ( 491) 00:12:54.399 2.573 - 2.587: 98.9592% ( 225) 00:12:54.400 2.587 - 2.600: 99.2958% ( 65) 00:12:54.400 2.600 - 2.613: 99.3372% ( 8) 00:12:54.400 2.613 - 2.627: 99.3527% ( 3) 00:12:54.400 2.627 - 2.640: 99.3631% ( 2) 00:12:54.400 2.680 - 2.693: 99.3683% ( 1) 00:12:54.400 4.133 - 4.160: 99.3734% ( 1) 00:12:54.400 4.240 - 4.267: 99.3786% ( 1) 00:12:54.400 4.373 - 4.400: 99.3838% ( 1) 00:12:54.400 4.400 - 4.427: 99.3890% ( 1) 00:12:54.400 4.427 - 4.453: 99.3993% ( 2) 00:12:54.400 4.587 - 4.613: 99.4045% ( 1) 00:12:54.400 4.613 - 4.640: 99.4097% ( 1) 00:12:54.400 4.667 - 4.693: 99.4149% ( 1) 00:12:54.400 4.693 - 4.720: 99.4200% ( 1) 00:12:54.400 4.720 - 4.747: 99.4252% ( 1) 00:12:54.400 4.747 - 4.773: 99.4304% ( 1) 00:12:54.400 4.773 - 4.800: 99.4408% ( 2) 00:12:54.400 4.827 - 4.853: 99.4511% ( 2) 00:12:54.400 4.853 - 4.880: 99.4770% ( 5) 00:12:54.400 4.880 - 4.907: 99.4822% ( 1) 00:12:54.400 5.040 - 5.067: 99.4874% ( 1) 00:12:54.400 5.067 - 5.093: 99.4925% ( 1) 00:12:54.400 5.093 - 5.120: 99.4977% ( 1) 00:12:54.400 5.173 - 5.200: 99.5029% ( 1) 00:12:54.400 5.200 - 5.227: 99.5081% ( 1) 00:12:54.400 5.333 - 5.360: 99.5133% ( 1) 00:12:54.400 5.360 - 5.387: 99.5236% ( 2) 00:12:54.400 5.387 - 5.413: 99.5288% ( 1) 00:12:54.400 5.440 - 5.467: 99.5340% ( 1) 00:12:54.400 5.493 - 5.520: 99.5391% ( 1) 00:12:54.400 5.707 - 5.733: 99.5495% ( 2) 00:12:54.400 5.760 - 5.787: 99.5547% ( 1) 00:12:54.400 5.973 - 6.000: 99.5599% ( 1) 00:12:54.400 6.160 - 6.187: 99.5650% ( 1) 00:12:54.400 6.240 - 6.267: 99.5702% ( 1) 00:12:54.400 6.400 - 6.427: 99.5806% ( 2) 00:12:54.400 6.827 - 6.880: 99.5857% ( 1) 00:12:54.400 7.947 - 8.000: 99.5909% ( 1) 00:12:54.400 10.293 - 10.347: 99.5961% ( 1) 00:12:54.400 10.933 - 10.987: 99.6013% ( 1) 00:12:54.400 11.413 - 11.467: 99.6065% ( 1) 00:12:54.400 11.787 - 11.840: 99.6116% ( 1) 00:12:54.400 43.733 - 43.947: 99.6168% ( 1) 00:12:54.400 153.600 - 154.453: 99.6220% ( 1) 00:12:54.400 3986.773 - 4014.080: 100.0000% ( 73) 00:12:54.400 00:12:54.400 14:49:36 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:54.400 14:49:36 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:54.400 14:49:36 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 
00:12:54.400 14:49:36 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:54.400 14:49:36 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:54.400 [2024-04-26 14:49:36.990017] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:54.400 [ 00:12:54.400 { 00:12:54.400 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:54.400 "subtype": "Discovery", 00:12:54.400 "listen_addresses": [], 00:12:54.400 "allow_any_host": true, 00:12:54.400 "hosts": [] 00:12:54.400 }, 00:12:54.400 { 00:12:54.400 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:54.400 "subtype": "NVMe", 00:12:54.400 "listen_addresses": [ 00:12:54.400 { 00:12:54.400 "transport": "VFIOUSER", 00:12:54.400 "trtype": "VFIOUSER", 00:12:54.400 "adrfam": "IPv4", 00:12:54.400 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:54.400 "trsvcid": "0" 00:12:54.400 } 00:12:54.400 ], 00:12:54.400 "allow_any_host": true, 00:12:54.400 "hosts": [], 00:12:54.400 "serial_number": "SPDK1", 00:12:54.400 "model_number": "SPDK bdev Controller", 00:12:54.400 "max_namespaces": 32, 00:12:54.400 "min_cntlid": 1, 00:12:54.400 "max_cntlid": 65519, 00:12:54.400 "namespaces": [ 00:12:54.400 { 00:12:54.400 "nsid": 1, 00:12:54.400 "bdev_name": "Malloc1", 00:12:54.400 "name": "Malloc1", 00:12:54.400 "nguid": "3ABE128468484EFF9F464C011465E264", 00:12:54.400 "uuid": "3abe1284-6848-4eff-9f46-4c011465e264" 00:12:54.400 } 00:12:54.400 ] 00:12:54.400 }, 00:12:54.400 { 00:12:54.400 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:54.400 "subtype": "NVMe", 00:12:54.400 "listen_addresses": [ 00:12:54.400 { 00:12:54.400 "transport": "VFIOUSER", 00:12:54.400 "trtype": "VFIOUSER", 00:12:54.400 "adrfam": "IPv4", 00:12:54.400 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:54.400 "trsvcid": "0" 00:12:54.400 } 00:12:54.400 ], 00:12:54.400 "allow_any_host": true, 00:12:54.400 "hosts": [], 00:12:54.400 "serial_number": "SPDK2", 00:12:54.400 "model_number": "SPDK bdev Controller", 00:12:54.400 "max_namespaces": 32, 00:12:54.400 "min_cntlid": 1, 00:12:54.400 "max_cntlid": 65519, 00:12:54.400 "namespaces": [ 00:12:54.400 { 00:12:54.400 "nsid": 1, 00:12:54.400 "bdev_name": "Malloc2", 00:12:54.400 "name": "Malloc2", 00:12:54.400 "nguid": "F056EDFCB4B149D98839C9334073FE51", 00:12:54.400 "uuid": "f056edfc-b4b1-49d9-8839-c9334073fe51" 00:12:54.400 } 00:12:54.400 ] 00:12:54.400 } 00:12:54.400 ] 00:12:54.400 14:49:37 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:54.400 14:49:37 -- target/nvmf_vfio_user.sh@34 -- # aerpid=989115 00:12:54.400 14:49:37 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:54.400 14:49:37 -- common/autotest_common.sh@1251 -- # local i=0 00:12:54.400 14:49:37 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:54.400 14:49:37 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:54.400 14:49:37 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:54.400 14:49:37 -- common/autotest_common.sh@1262 -- # return 0 00:12:54.400 14:49:37 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:54.400 14:49:37 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:54.660 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.660 [2024-04-26 14:49:37.183437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:54.660 Malloc3 00:12:54.660 14:49:37 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:54.920 [2024-04-26 14:49:37.353596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:54.920 14:49:37 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:54.920 Asynchronous Event Request test 00:12:54.920 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:54.920 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:54.920 Registering asynchronous event callbacks... 00:12:54.920 Starting namespace attribute notice tests for all controllers... 00:12:54.920 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:54.920 aer_cb - Changed Namespace 00:12:54.920 Cleaning up... 00:12:54.920 [ 00:12:54.920 { 00:12:54.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:54.920 "subtype": "Discovery", 00:12:54.920 "listen_addresses": [], 00:12:54.920 "allow_any_host": true, 00:12:54.920 "hosts": [] 00:12:54.920 }, 00:12:54.920 { 00:12:54.920 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:54.920 "subtype": "NVMe", 00:12:54.920 "listen_addresses": [ 00:12:54.920 { 00:12:54.920 "transport": "VFIOUSER", 00:12:54.920 "trtype": "VFIOUSER", 00:12:54.920 "adrfam": "IPv4", 00:12:54.920 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:54.920 "trsvcid": "0" 00:12:54.920 } 00:12:54.920 ], 00:12:54.920 "allow_any_host": true, 00:12:54.920 "hosts": [], 00:12:54.920 "serial_number": "SPDK1", 00:12:54.921 "model_number": "SPDK bdev Controller", 00:12:54.921 "max_namespaces": 32, 00:12:54.921 "min_cntlid": 1, 00:12:54.921 "max_cntlid": 65519, 00:12:54.921 "namespaces": [ 00:12:54.921 { 00:12:54.921 "nsid": 1, 00:12:54.921 "bdev_name": "Malloc1", 00:12:54.921 "name": "Malloc1", 00:12:54.921 "nguid": "3ABE128468484EFF9F464C011465E264", 00:12:54.921 "uuid": "3abe1284-6848-4eff-9f46-4c011465e264" 00:12:54.921 }, 00:12:54.921 { 00:12:54.921 "nsid": 2, 00:12:54.921 "bdev_name": "Malloc3", 00:12:54.921 "name": "Malloc3", 00:12:54.921 "nguid": "2F8D899F21B74672B6D166710DD215E6", 00:12:54.921 "uuid": "2f8d899f-21b7-4672-b6d1-66710dd215e6" 00:12:54.921 } 00:12:54.921 ] 00:12:54.921 }, 00:12:54.921 { 00:12:54.921 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:54.921 "subtype": "NVMe", 00:12:54.921 "listen_addresses": [ 00:12:54.921 { 00:12:54.921 "transport": "VFIOUSER", 00:12:54.921 "trtype": "VFIOUSER", 00:12:54.921 "adrfam": "IPv4", 00:12:54.921 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:54.921 "trsvcid": "0" 00:12:54.921 } 00:12:54.921 ], 00:12:54.921 "allow_any_host": true, 00:12:54.921 "hosts": [], 00:12:54.921 "serial_number": "SPDK2", 00:12:54.921 "model_number": "SPDK bdev Controller", 00:12:54.921 "max_namespaces": 32, 00:12:54.921 "min_cntlid": 1, 
00:12:54.921 "max_cntlid": 65519, 00:12:54.921 "namespaces": [ 00:12:54.921 { 00:12:54.921 "nsid": 1, 00:12:54.921 "bdev_name": "Malloc2", 00:12:54.921 "name": "Malloc2", 00:12:54.921 "nguid": "F056EDFCB4B149D98839C9334073FE51", 00:12:54.921 "uuid": "f056edfc-b4b1-49d9-8839-c9334073fe51" 00:12:54.921 } 00:12:54.921 ] 00:12:54.921 } 00:12:54.921 ] 00:12:54.921 14:49:37 -- target/nvmf_vfio_user.sh@44 -- # wait 989115 00:12:54.921 14:49:37 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.921 14:49:37 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:54.921 14:49:37 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:54.921 14:49:37 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:54.921 [2024-04-26 14:49:37.576796] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:54.921 [2024-04-26 14:49:37.576876] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989233 ] 00:12:54.921 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.183 [2024-04-26 14:49:37.610406] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:55.183 [2024-04-26 14:49:37.615627] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:55.183 [2024-04-26 14:49:37.615648] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5b593d7000 00:12:55.183 [2024-04-26 14:49:37.616634] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.183 [2024-04-26 14:49:37.617636] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.183 [2024-04-26 14:49:37.618650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.183 [2024-04-26 14:49:37.619653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:55.183 [2024-04-26 14:49:37.620654] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:55.183 [2024-04-26 14:49:37.621664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.183 [2024-04-26 14:49:37.622670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:55.183 [2024-04-26 14:49:37.623677] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:55.183 [2024-04-26 14:49:37.624686] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:55.183 [2024-04-26 14:49:37.624699] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5b593cc000 00:12:55.183 [2024-04-26 14:49:37.626123] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:55.183 [2024-04-26 14:49:37.642991] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:55.183 [2024-04-26 14:49:37.643012] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:55.183 [2024-04-26 14:49:37.648086] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:55.183 [2024-04-26 14:49:37.648133] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:55.183 [2024-04-26 14:49:37.648216] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:55.183 [2024-04-26 14:49:37.648231] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:55.183 [2024-04-26 14:49:37.648236] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:55.183 [2024-04-26 14:49:37.649094] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:55.183 [2024-04-26 14:49:37.649103] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:55.183 [2024-04-26 14:49:37.649110] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:55.183 [2024-04-26 14:49:37.650102] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:55.183 [2024-04-26 14:49:37.650110] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:55.183 [2024-04-26 14:49:37.650118] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:55.183 [2024-04-26 14:49:37.651111] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:55.183 [2024-04-26 14:49:37.651119] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:55.183 [2024-04-26 14:49:37.652117] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:55.183 [2024-04-26 14:49:37.652125] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:55.183 [2024-04-26 14:49:37.652129] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:55.183 [2024-04-26 14:49:37.652136] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:55.183 [2024-04-26 14:49:37.652241] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:55.183 [2024-04-26 14:49:37.652246] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:55.184 [2024-04-26 14:49:37.652251] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:55.184 [2024-04-26 14:49:37.653122] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:55.184 [2024-04-26 14:49:37.654125] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:55.184 [2024-04-26 14:49:37.655131] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:55.184 [2024-04-26 14:49:37.656134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.184 [2024-04-26 14:49:37.656176] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:55.184 [2024-04-26 14:49:37.657147] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:55.184 [2024-04-26 14:49:37.657157] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:55.184 [2024-04-26 14:49:37.657162] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.657183] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:55.184 [2024-04-26 14:49:37.657191] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.657204] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:55.184 [2024-04-26 14:49:37.657209] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.184 [2024-04-26 14:49:37.657220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.663846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.663857] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:55.184 [2024-04-26 14:49:37.663862] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:55.184 [2024-04-26 14:49:37.663866] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:55.184 [2024-04-26 14:49:37.663871] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:55.184 [2024-04-26 14:49:37.663875] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:55.184 [2024-04-26 14:49:37.663880] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:55.184 [2024-04-26 14:49:37.663884] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.663892] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.663901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.671842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.671856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.184 [2024-04-26 14:49:37.671865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.184 [2024-04-26 14:49:37.671873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.184 [2024-04-26 14:49:37.671881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.184 [2024-04-26 14:49:37.671886] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.671894] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.671903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.679843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.679852] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:55.184 [2024-04-26 14:49:37.679857] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.679866] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.679871] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.679880] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.687842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.687892] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.687900] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.687907] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:55.184 [2024-04-26 14:49:37.687912] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:55.184 [2024-04-26 14:49:37.687918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.695841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.695851] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:55.184 [2024-04-26 14:49:37.695863] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.695870] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.695877] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:55.184 [2024-04-26 14:49:37.695882] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.184 [2024-04-26 14:49:37.695887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.703842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.703855] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.703862] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.703869] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:55.184 [2024-04-26 14:49:37.703874] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.184 [2024-04-26 14:49:37.703880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.711842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.711854] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.711860] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.711868] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.711874] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.711879] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.711884] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:55.184 [2024-04-26 14:49:37.711888] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:55.184 [2024-04-26 14:49:37.711893] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:55.184 [2024-04-26 14:49:37.711908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.719843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.719856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.726863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.726875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.735843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.735856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.743842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.743854] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:55.184 [2024-04-26 14:49:37.743858] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:55.184 [2024-04-26 14:49:37.743862] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:55.184 [2024-04-26 14:49:37.743865] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:55.184 [2024-04-26 14:49:37.743872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:55.184 
[2024-04-26 14:49:37.743879] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:55.184 [2024-04-26 14:49:37.743883] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:55.184 [2024-04-26 14:49:37.743889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.743896] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:55.184 [2024-04-26 14:49:37.743901] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:55.184 [2024-04-26 14:49:37.743907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.743918] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:55.184 [2024-04-26 14:49:37.743923] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:55.184 [2024-04-26 14:49:37.743928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:55.184 [2024-04-26 14:49:37.751843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.751858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.751867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:55.184 [2024-04-26 14:49:37.751874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:55.184 ===================================================== 00:12:55.184 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:55.184 ===================================================== 00:12:55.184 Controller Capabilities/Features 00:12:55.184 ================================ 00:12:55.184 Vendor ID: 4e58 00:12:55.184 Subsystem Vendor ID: 4e58 00:12:55.184 Serial Number: SPDK2 00:12:55.184 Model Number: SPDK bdev Controller 00:12:55.184 Firmware Version: 24.05 00:12:55.184 Recommended Arb Burst: 6 00:12:55.184 IEEE OUI Identifier: 8d 6b 50 00:12:55.184 Multi-path I/O 00:12:55.184 May have multiple subsystem ports: Yes 00:12:55.184 May have multiple controllers: Yes 00:12:55.184 Associated with SR-IOV VF: No 00:12:55.184 Max Data Transfer Size: 131072 00:12:55.184 Max Number of Namespaces: 32 00:12:55.184 Max Number of I/O Queues: 127 00:12:55.184 NVMe Specification Version (VS): 1.3 00:12:55.184 NVMe Specification Version (Identify): 1.3 00:12:55.184 Maximum Queue Entries: 256 00:12:55.184 Contiguous Queues Required: Yes 00:12:55.184 Arbitration Mechanisms Supported 00:12:55.184 Weighted Round Robin: Not Supported 00:12:55.184 Vendor Specific: Not Supported 00:12:55.184 Reset Timeout: 15000 ms 00:12:55.184 Doorbell Stride: 4 bytes 00:12:55.184 NVM Subsystem Reset: Not Supported 00:12:55.184 Command Sets Supported 00:12:55.184 NVM Command Set: Supported 00:12:55.184 Boot Partition: Not Supported 00:12:55.184 
Memory Page Size Minimum: 4096 bytes 00:12:55.184 Memory Page Size Maximum: 4096 bytes 00:12:55.184 Persistent Memory Region: Not Supported 00:12:55.184 Optional Asynchronous Events Supported 00:12:55.184 Namespace Attribute Notices: Supported 00:12:55.184 Firmware Activation Notices: Not Supported 00:12:55.184 ANA Change Notices: Not Supported 00:12:55.184 PLE Aggregate Log Change Notices: Not Supported 00:12:55.184 LBA Status Info Alert Notices: Not Supported 00:12:55.184 EGE Aggregate Log Change Notices: Not Supported 00:12:55.184 Normal NVM Subsystem Shutdown event: Not Supported 00:12:55.184 Zone Descriptor Change Notices: Not Supported 00:12:55.184 Discovery Log Change Notices: Not Supported 00:12:55.184 Controller Attributes 00:12:55.184 128-bit Host Identifier: Supported 00:12:55.184 Non-Operational Permissive Mode: Not Supported 00:12:55.184 NVM Sets: Not Supported 00:12:55.184 Read Recovery Levels: Not Supported 00:12:55.184 Endurance Groups: Not Supported 00:12:55.184 Predictable Latency Mode: Not Supported 00:12:55.184 Traffic Based Keep ALive: Not Supported 00:12:55.184 Namespace Granularity: Not Supported 00:12:55.184 SQ Associations: Not Supported 00:12:55.184 UUID List: Not Supported 00:12:55.184 Multi-Domain Subsystem: Not Supported 00:12:55.184 Fixed Capacity Management: Not Supported 00:12:55.184 Variable Capacity Management: Not Supported 00:12:55.184 Delete Endurance Group: Not Supported 00:12:55.184 Delete NVM Set: Not Supported 00:12:55.184 Extended LBA Formats Supported: Not Supported 00:12:55.184 Flexible Data Placement Supported: Not Supported 00:12:55.184 00:12:55.184 Controller Memory Buffer Support 00:12:55.184 ================================ 00:12:55.184 Supported: No 00:12:55.184 00:12:55.184 Persistent Memory Region Support 00:12:55.184 ================================ 00:12:55.184 Supported: No 00:12:55.184 00:12:55.184 Admin Command Set Attributes 00:12:55.184 ============================ 00:12:55.184 Security Send/Receive: Not Supported 00:12:55.184 Format NVM: Not Supported 00:12:55.184 Firmware Activate/Download: Not Supported 00:12:55.184 Namespace Management: Not Supported 00:12:55.184 Device Self-Test: Not Supported 00:12:55.184 Directives: Not Supported 00:12:55.184 NVMe-MI: Not Supported 00:12:55.184 Virtualization Management: Not Supported 00:12:55.185 Doorbell Buffer Config: Not Supported 00:12:55.185 Get LBA Status Capability: Not Supported 00:12:55.185 Command & Feature Lockdown Capability: Not Supported 00:12:55.185 Abort Command Limit: 4 00:12:55.185 Async Event Request Limit: 4 00:12:55.185 Number of Firmware Slots: N/A 00:12:55.185 Firmware Slot 1 Read-Only: N/A 00:12:55.185 Firmware Activation Without Reset: N/A 00:12:55.185 Multiple Update Detection Support: N/A 00:12:55.185 Firmware Update Granularity: No Information Provided 00:12:55.185 Per-Namespace SMART Log: No 00:12:55.185 Asymmetric Namespace Access Log Page: Not Supported 00:12:55.185 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:55.185 Command Effects Log Page: Supported 00:12:55.185 Get Log Page Extended Data: Supported 00:12:55.185 Telemetry Log Pages: Not Supported 00:12:55.185 Persistent Event Log Pages: Not Supported 00:12:55.185 Supported Log Pages Log Page: May Support 00:12:55.185 Commands Supported & Effects Log Page: Not Supported 00:12:55.185 Feature Identifiers & Effects Log Page:May Support 00:12:55.185 NVMe-MI Commands & Effects Log Page: May Support 00:12:55.185 Data Area 4 for Telemetry Log: Not Supported 00:12:55.185 Error Log Page Entries Supported: 128 
00:12:55.185 Keep Alive: Supported 00:12:55.185 Keep Alive Granularity: 10000 ms 00:12:55.185 00:12:55.185 NVM Command Set Attributes 00:12:55.185 ========================== 00:12:55.185 Submission Queue Entry Size 00:12:55.185 Max: 64 00:12:55.185 Min: 64 00:12:55.185 Completion Queue Entry Size 00:12:55.185 Max: 16 00:12:55.185 Min: 16 00:12:55.185 Number of Namespaces: 32 00:12:55.185 Compare Command: Supported 00:12:55.185 Write Uncorrectable Command: Not Supported 00:12:55.185 Dataset Management Command: Supported 00:12:55.185 Write Zeroes Command: Supported 00:12:55.185 Set Features Save Field: Not Supported 00:12:55.185 Reservations: Not Supported 00:12:55.185 Timestamp: Not Supported 00:12:55.185 Copy: Supported 00:12:55.185 Volatile Write Cache: Present 00:12:55.185 Atomic Write Unit (Normal): 1 00:12:55.185 Atomic Write Unit (PFail): 1 00:12:55.185 Atomic Compare & Write Unit: 1 00:12:55.185 Fused Compare & Write: Supported 00:12:55.185 Scatter-Gather List 00:12:55.185 SGL Command Set: Supported (Dword aligned) 00:12:55.185 SGL Keyed: Not Supported 00:12:55.185 SGL Bit Bucket Descriptor: Not Supported 00:12:55.185 SGL Metadata Pointer: Not Supported 00:12:55.185 Oversized SGL: Not Supported 00:12:55.185 SGL Metadata Address: Not Supported 00:12:55.185 SGL Offset: Not Supported 00:12:55.185 Transport SGL Data Block: Not Supported 00:12:55.185 Replay Protected Memory Block: Not Supported 00:12:55.185 00:12:55.185 Firmware Slot Information 00:12:55.185 ========================= 00:12:55.185 Active slot: 1 00:12:55.185 Slot 1 Firmware Revision: 24.05 00:12:55.185 00:12:55.185 00:12:55.185 Commands Supported and Effects 00:12:55.185 ============================== 00:12:55.185 Admin Commands 00:12:55.185 -------------- 00:12:55.185 Get Log Page (02h): Supported 00:12:55.185 Identify (06h): Supported 00:12:55.185 Abort (08h): Supported 00:12:55.185 Set Features (09h): Supported 00:12:55.185 Get Features (0Ah): Supported 00:12:55.185 Asynchronous Event Request (0Ch): Supported 00:12:55.185 Keep Alive (18h): Supported 00:12:55.185 I/O Commands 00:12:55.185 ------------ 00:12:55.185 Flush (00h): Supported LBA-Change 00:12:55.185 Write (01h): Supported LBA-Change 00:12:55.185 Read (02h): Supported 00:12:55.185 Compare (05h): Supported 00:12:55.185 Write Zeroes (08h): Supported LBA-Change 00:12:55.185 Dataset Management (09h): Supported LBA-Change 00:12:55.185 Copy (19h): Supported LBA-Change 00:12:55.185 Unknown (79h): Supported LBA-Change 00:12:55.185 Unknown (7Ah): Supported 00:12:55.185 00:12:55.185 Error Log 00:12:55.185 ========= 00:12:55.185 00:12:55.185 Arbitration 00:12:55.185 =========== 00:12:55.185 Arbitration Burst: 1 00:12:55.185 00:12:55.185 Power Management 00:12:55.185 ================ 00:12:55.185 Number of Power States: 1 00:12:55.185 Current Power State: Power State #0 00:12:55.185 Power State #0: 00:12:55.185 Max Power: 0.00 W 00:12:55.185 Non-Operational State: Operational 00:12:55.185 Entry Latency: Not Reported 00:12:55.185 Exit Latency: Not Reported 00:12:55.185 Relative Read Throughput: 0 00:12:55.185 Relative Read Latency: 0 00:12:55.185 Relative Write Throughput: 0 00:12:55.185 Relative Write Latency: 0 00:12:55.185 Idle Power: Not Reported 00:12:55.185 Active Power: Not Reported 00:12:55.185 Non-Operational Permissive Mode: Not Supported 00:12:55.185 00:12:55.185 Health Information 00:12:55.185 ================== 00:12:55.185 Critical Warnings: 00:12:55.185 Available Spare Space: OK 00:12:55.185 Temperature: OK 00:12:55.185 Device Reliability: OK 00:12:55.185 
Read Only: No 00:12:55.185 Volatile Memory Backup: OK 00:12:55.185 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-26 14:49:37.751976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:55.185 [2024-04-26 14:49:37.759843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:55.185 [2024-04-26 14:49:37.759879] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:55.185 [2024-04-26 14:49:37.759888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.185 [2024-04-26 14:49:37.759894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.185 [2024-04-26 14:49:37.759900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.185 [2024-04-26 14:49:37.759906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.185 [2024-04-26 14:49:37.759953] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:55.185 [2024-04-26 14:49:37.759963] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:55.185 [2024-04-26 14:49:37.760958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.185 [2024-04-26 14:49:37.761005] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:55.185 [2024-04-26 14:49:37.761012] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:55.185 [2024-04-26 14:49:37.761967] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:55.185 [2024-04-26 14:49:37.761979] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:55.185 [2024-04-26 14:49:37.762026] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:55.185 [2024-04-26 14:49:37.764843] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:55.185 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:55.185 Available Spare: 0% 00:12:55.185 Available Spare Threshold: 0% 00:12:55.185 Life Percentage Used: 0% 00:12:55.185 Data Units Read: 0 00:12:55.185 Data Units Written: 0 00:12:55.185 Host Read Commands: 0 00:12:55.185 Host Write Commands: 0 00:12:55.185 Controller Busy Time: 0 minutes 00:12:55.185 Power Cycles: 0 00:12:55.185 Power On Hours: 0 hours 00:12:55.185 Unsafe Shutdowns: 0 00:12:55.185 Unrecoverable Media Errors: 0 00:12:55.185 Lifetime Error Log Entries: 0 00:12:55.185 Warning Temperature Time: 0 minutes 00:12:55.185 Critical Temperature Time: 0 minutes 00:12:55.185 00:12:55.185 Number of Queues 00:12:55.185 ================ 00:12:55.185 Number of I/O Submission Queues: 127
00:12:55.185 Number of I/O Completion Queues: 127 00:12:55.185 00:12:55.185 Active Namespaces 00:12:55.185 ================= 00:12:55.185 Namespace ID:1 00:12:55.185 Error Recovery Timeout: Unlimited 00:12:55.185 Command Set Identifier: NVM (00h) 00:12:55.185 Deallocate: Supported 00:12:55.185 Deallocated/Unwritten Error: Not Supported 00:12:55.185 Deallocated Read Value: Unknown 00:12:55.185 Deallocate in Write Zeroes: Not Supported 00:12:55.185 Deallocated Guard Field: 0xFFFF 00:12:55.185 Flush: Supported 00:12:55.185 Reservation: Supported 00:12:55.185 Namespace Sharing Capabilities: Multiple Controllers 00:12:55.185 Size (in LBAs): 131072 (0GiB) 00:12:55.185 Capacity (in LBAs): 131072 (0GiB) 00:12:55.185 Utilization (in LBAs): 131072 (0GiB) 00:12:55.185 NGUID: F056EDFCB4B149D98839C9334073FE51 00:12:55.185 UUID: f056edfc-b4b1-49d9-8839-c9334073fe51 00:12:55.185 Thin Provisioning: Not Supported 00:12:55.185 Per-NS Atomic Units: Yes 00:12:55.185 Atomic Boundary Size (Normal): 0 00:12:55.185 Atomic Boundary Size (PFail): 0 00:12:55.185 Atomic Boundary Offset: 0 00:12:55.185 Maximum Single Source Range Length: 65535 00:12:55.185 Maximum Copy Length: 65535 00:12:55.185 Maximum Source Range Count: 1 00:12:55.185 NGUID/EUI64 Never Reused: No 00:12:55.185 Namespace Write Protected: No 00:12:55.185 Number of LBA Formats: 1 00:12:55.185 Current LBA Format: LBA Format #00 00:12:55.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:55.185 00:12:55.185 14:49:37 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:55.185 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.445 [2024-04-26 14:49:37.948842] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:00.741 [2024-04-26 14:49:43.055012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:00.741 Initializing NVMe Controllers 00:13:00.741 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:00.741 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:00.741 Initialization complete. Launching workers. 
00:13:00.741 ======================================================== 00:13:00.741 Latency(us) 00:13:00.741 Device Information : IOPS MiB/s Average min max 00:13:00.741 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40037.43 156.40 3196.88 851.68 6863.31 00:13:00.741 ======================================================== 00:13:00.741 Total : 40037.43 156.40 3196.88 851.68 6863.31 00:13:00.741 00:13:00.741 14:49:43 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:00.741 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.741 [2024-04-26 14:49:43.226561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:06.085 [2024-04-26 14:49:48.249749] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:06.085 Initializing NVMe Controllers 00:13:06.085 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:06.085 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:06.085 Initialization complete. Launching workers. 00:13:06.085 ======================================================== 00:13:06.085 Latency(us) 00:13:06.085 Device Information : IOPS MiB/s Average min max 00:13:06.085 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35348.66 138.08 3620.92 1133.86 7455.44 00:13:06.085 ======================================================== 00:13:06.085 Total : 35348.66 138.08 3620.92 1133.86 7455.44 00:13:06.085 00:13:06.085 14:49:48 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:06.085 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.085 [2024-04-26 14:49:48.438888] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.366 [2024-04-26 14:49:53.582922] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.366 Initializing NVMe Controllers 00:13:11.366 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:11.366 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:11.366 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:11.366 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:11.366 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:11.366 Initialization complete. Launching workers. 
00:13:11.366 Starting thread on core 2 00:13:11.366 Starting thread on core 3 00:13:11.366 Starting thread on core 1 00:13:11.366 14:49:53 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:11.366 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.366 [2024-04-26 14:49:53.839301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.663 [2024-04-26 14:49:56.887576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.663 Initializing NVMe Controllers 00:13:14.663 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.663 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.663 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:14.663 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:14.663 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:14.663 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:14.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:14.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:14.663 Initialization complete. Launching workers. 00:13:14.663 Starting thread on core 1 with urgent priority queue 00:13:14.663 Starting thread on core 2 with urgent priority queue 00:13:14.663 Starting thread on core 3 with urgent priority queue 00:13:14.663 Starting thread on core 0 with urgent priority queue 00:13:14.663 SPDK bdev Controller (SPDK2 ) core 0: 8606.00 IO/s 11.62 secs/100000 ios 00:13:14.663 SPDK bdev Controller (SPDK2 ) core 1: 15481.67 IO/s 6.46 secs/100000 ios 00:13:14.663 SPDK bdev Controller (SPDK2 ) core 2: 8062.67 IO/s 12.40 secs/100000 ios 00:13:14.663 SPDK bdev Controller (SPDK2 ) core 3: 9121.00 IO/s 10.96 secs/100000 ios 00:13:14.663 ======================================================== 00:13:14.663 00:13:14.663 14:49:56 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:14.663 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.663 [2024-04-26 14:49:57.157236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.663 [2024-04-26 14:49:57.170330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.663 Initializing NVMe Controllers 00:13:14.663 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.663 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.663 Namespace ID: 1 size: 0GB 00:13:14.663 Initialization complete. 00:13:14.663 INFO: using host memory buffer for IO 00:13:14.663 Hello world! 
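    [editor's aside, not part of the captured output] The MiB/s column in the perf runs above and the "secs/100000 ios" column in the arbitration run are both derived from the reported IOPS, so they can be cross-checked with a one-liner; python3 is assumed to be available (it already is, since rpc.py is used throughout this run), and the constants below are copied verbatim from this log.
        python3 -c 'print(40037.43 * 4096 / 2**20)'   # read perf run: 4096-byte I/O at 40037.43 IOPS -> ~156.40 MiB/s
        python3 -c 'print(100000 / 8606.00)'          # arbitration core 0: 8606.00 IO/s -> ~11.62 secs/100000 ios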
00:13:14.663 14:49:57 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:14.663 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.924 [2024-04-26 14:49:57.428088] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.865 Initializing NVMe Controllers 00:13:15.865 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.865 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.865 Initialization complete. Launching workers. 00:13:15.865 submit (in ns) avg, min, max = 9226.7, 3868.3, 4001908.3 00:13:15.865 complete (in ns) avg, min, max = 17798.0, 2358.3, 4032229.2 00:13:15.865 00:13:15.865 Submit histogram 00:13:15.865 ================ 00:13:15.865 Range in us Cumulative Count 00:13:15.865 3.867 - 3.893: 0.9415% ( 181) 00:13:15.865 3.893 - 3.920: 5.9821% ( 969) 00:13:15.865 3.920 - 3.947: 14.0345% ( 1548) 00:13:15.865 3.947 - 3.973: 24.2665% ( 1967) 00:13:15.865 3.973 - 4.000: 36.4076% ( 2334) 00:13:15.865 4.000 - 4.027: 50.4474% ( 2699) 00:13:15.865 4.027 - 4.053: 66.4482% ( 3076) 00:13:15.865 4.053 - 4.080: 80.6284% ( 2726) 00:13:15.865 4.080 - 4.107: 91.1309% ( 2019) 00:13:15.865 4.107 - 4.133: 96.3483% ( 1003) 00:13:15.865 4.133 - 4.160: 98.4915% ( 412) 00:13:15.865 4.160 - 4.187: 99.1573% ( 128) 00:13:15.865 4.187 - 4.213: 99.3654% ( 40) 00:13:15.865 4.213 - 4.240: 99.4122% ( 9) 00:13:15.865 4.240 - 4.267: 99.4486% ( 7) 00:13:15.865 4.267 - 4.293: 99.4746% ( 5) 00:13:15.865 4.293 - 4.320: 99.4798% ( 1) 00:13:15.865 4.320 - 4.347: 99.4902% ( 2) 00:13:15.865 4.347 - 4.373: 99.4954% ( 1) 00:13:15.865 4.373 - 4.400: 99.5006% ( 1) 00:13:15.865 4.400 - 4.427: 99.5110% ( 2) 00:13:15.865 4.533 - 4.560: 99.5162% ( 1) 00:13:15.865 4.747 - 4.773: 99.5214% ( 1) 00:13:15.865 4.800 - 4.827: 99.5318% ( 2) 00:13:15.865 4.853 - 4.880: 99.5370% ( 1) 00:13:15.865 4.933 - 4.960: 99.5422% ( 1) 00:13:15.865 5.040 - 5.067: 99.5526% ( 2) 00:13:15.865 5.147 - 5.173: 99.5578% ( 1) 00:13:15.865 5.760 - 5.787: 99.5630% ( 1) 00:13:15.865 5.787 - 5.813: 99.5682% ( 1) 00:13:15.865 5.867 - 5.893: 99.5734% ( 1) 00:13:15.865 5.920 - 5.947: 99.5839% ( 2) 00:13:15.865 5.947 - 5.973: 99.5891% ( 1) 00:13:15.865 6.000 - 6.027: 99.5943% ( 1) 00:13:15.865 6.027 - 6.053: 99.6099% ( 3) 00:13:15.865 6.053 - 6.080: 99.6151% ( 1) 00:13:15.865 6.107 - 6.133: 99.6255% ( 2) 00:13:15.865 6.133 - 6.160: 99.6463% ( 4) 00:13:15.865 6.187 - 6.213: 99.6515% ( 1) 00:13:15.865 6.213 - 6.240: 99.6567% ( 1) 00:13:15.865 6.240 - 6.267: 99.6671% ( 2) 00:13:15.865 6.293 - 6.320: 99.6775% ( 2) 00:13:15.865 6.347 - 6.373: 99.6827% ( 1) 00:13:15.865 6.373 - 6.400: 99.6931% ( 2) 00:13:15.865 6.427 - 6.453: 99.7035% ( 2) 00:13:15.865 6.480 - 6.507: 99.7087% ( 1) 00:13:15.865 6.507 - 6.533: 99.7243% ( 3) 00:13:15.865 6.533 - 6.560: 99.7295% ( 1) 00:13:15.865 6.560 - 6.587: 99.7399% ( 2) 00:13:15.865 6.640 - 6.667: 99.7451% ( 1) 00:13:15.865 6.773 - 6.800: 99.7503% ( 1) 00:13:15.865 6.800 - 6.827: 99.7555% ( 1) 00:13:15.865 6.827 - 6.880: 99.7607% ( 1) 00:13:15.865 6.880 - 6.933: 99.7659% ( 1) 00:13:15.865 6.933 - 6.987: 99.7711% ( 1) 00:13:15.865 6.987 - 7.040: 99.7763% ( 1) 00:13:15.865 7.093 - 7.147: 99.7867% ( 2) 00:13:15.865 7.147 - 7.200: 99.7919% ( 1) 00:13:15.865 7.200 - 7.253: 99.8075% ( 3) 00:13:15.865 7.307 - 7.360: 99.8127% ( 1) 00:13:15.865 7.360 - 7.413: 99.8179% ( 1) 
00:13:15.865 7.467 - 7.520: 99.8231% ( 1) 00:13:15.865 7.520 - 7.573: 99.8283% ( 1) 00:13:15.865 7.680 - 7.733: 99.8335% ( 1) 00:13:15.865 7.733 - 7.787: 99.8387% ( 1) 00:13:15.865 8.160 - 8.213: 99.8439% ( 1) 00:13:15.865 8.320 - 8.373: 99.8491% ( 1) 00:13:15.865 8.907 - 8.960: 99.8543% ( 1) 00:13:15.865 9.173 - 9.227: 99.8596% ( 1) 00:13:15.865 11.787 - 11.840: 99.8648% ( 1) 00:13:15.865 13.227 - 13.280: 99.8700% ( 1) 00:13:15.865 3986.773 - 4014.080: 100.0000% ( 25) 00:13:15.865 00:13:15.865 Complete histogram 00:13:15.865 ================== 00:13:15.865 Range in us Cumulative Count 00:13:15.865 2.347 - 2.360: 0.0052% ( 1) 00:13:15.865 2.360 - 2.373: 0.0520% ( 9) 00:13:15.865 2.373 - 2.387: 1.1132% ( 204) 00:13:15.865 2.387 - 2.400: 1.2172% ( 20) 00:13:15.865 2.400 - 2.413: 1.9715% ( 145) 00:13:15.865 2.413 - [2024-04-26 14:49:58.523485] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:16.127 2.427: 57.8444% ( 10741) 00:13:16.127 2.427 - 2.440: 63.3271% ( 1054) 00:13:16.127 2.440 - 2.453: 76.4669% ( 2526) 00:13:16.127 2.453 - 2.467: 80.7532% ( 824) 00:13:16.127 2.467 - 2.480: 82.4854% ( 333) 00:13:16.127 2.480 - 2.493: 86.1111% ( 697) 00:13:16.127 2.493 - 2.507: 91.9007% ( 1113) 00:13:16.127 2.507 - 2.520: 95.7085% ( 732) 00:13:16.127 2.520 - 2.533: 97.6436% ( 372) 00:13:16.127 2.533 - 2.547: 98.8296% ( 228) 00:13:16.127 2.547 - 2.560: 99.2769% ( 86) 00:13:16.127 2.560 - 2.573: 99.4070% ( 25) 00:13:16.127 2.573 - 2.587: 99.4174% ( 2) 00:13:16.127 2.587 - 2.600: 99.4330% ( 3) 00:13:16.127 2.627 - 2.640: 99.4382% ( 1) 00:13:16.127 2.667 - 2.680: 99.4434% ( 1) 00:13:16.127 4.267 - 4.293: 99.4486% ( 1) 00:13:16.127 4.320 - 4.347: 99.4538% ( 1) 00:13:16.127 4.373 - 4.400: 99.4642% ( 2) 00:13:16.127 4.453 - 4.480: 99.4746% ( 2) 00:13:16.127 4.480 - 4.507: 99.4798% ( 1) 00:13:16.127 4.533 - 4.560: 99.4850% ( 1) 00:13:16.127 4.560 - 4.587: 99.4902% ( 1) 00:13:16.127 4.640 - 4.667: 99.4954% ( 1) 00:13:16.127 4.667 - 4.693: 99.5058% ( 2) 00:13:16.127 4.720 - 4.747: 99.5162% ( 2) 00:13:16.127 4.800 - 4.827: 99.5214% ( 1) 00:13:16.127 4.827 - 4.853: 99.5266% ( 1) 00:13:16.127 4.880 - 4.907: 99.5370% ( 2) 00:13:16.127 4.987 - 5.013: 99.5422% ( 1) 00:13:16.127 5.013 - 5.040: 99.5474% ( 1) 00:13:16.127 5.040 - 5.067: 99.5526% ( 1) 00:13:16.127 5.067 - 5.093: 99.5578% ( 1) 00:13:16.127 5.147 - 5.173: 99.5630% ( 1) 00:13:16.127 5.200 - 5.227: 99.5682% ( 1) 00:13:16.127 5.307 - 5.333: 99.5787% ( 2) 00:13:16.127 5.333 - 5.360: 99.5839% ( 1) 00:13:16.127 5.440 - 5.467: 99.5891% ( 1) 00:13:16.127 5.520 - 5.547: 99.5943% ( 1) 00:13:16.127 6.533 - 6.560: 99.5995% ( 1) 00:13:16.127 9.973 - 10.027: 99.6047% ( 1) 00:13:16.127 13.333 - 13.387: 99.6099% ( 1) 00:13:16.127 13.493 - 13.547: 99.6151% ( 1) 00:13:16.127 3495.253 - 3522.560: 99.6203% ( 1) 00:13:16.127 3986.773 - 4014.080: 99.9948% ( 72) 00:13:16.127 4014.080 - 4041.387: 100.0000% ( 1) 00:13:16.127 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:16.127 [ 00:13:16.127 { 
00:13:16.127 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:16.127 "subtype": "Discovery", 00:13:16.127 "listen_addresses": [], 00:13:16.127 "allow_any_host": true, 00:13:16.127 "hosts": [] 00:13:16.127 }, 00:13:16.127 { 00:13:16.127 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:16.127 "subtype": "NVMe", 00:13:16.127 "listen_addresses": [ 00:13:16.127 { 00:13:16.127 "transport": "VFIOUSER", 00:13:16.127 "trtype": "VFIOUSER", 00:13:16.127 "adrfam": "IPv4", 00:13:16.127 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:16.127 "trsvcid": "0" 00:13:16.127 } 00:13:16.127 ], 00:13:16.127 "allow_any_host": true, 00:13:16.127 "hosts": [], 00:13:16.127 "serial_number": "SPDK1", 00:13:16.127 "model_number": "SPDK bdev Controller", 00:13:16.127 "max_namespaces": 32, 00:13:16.127 "min_cntlid": 1, 00:13:16.127 "max_cntlid": 65519, 00:13:16.127 "namespaces": [ 00:13:16.127 { 00:13:16.127 "nsid": 1, 00:13:16.127 "bdev_name": "Malloc1", 00:13:16.127 "name": "Malloc1", 00:13:16.127 "nguid": "3ABE128468484EFF9F464C011465E264", 00:13:16.127 "uuid": "3abe1284-6848-4eff-9f46-4c011465e264" 00:13:16.127 }, 00:13:16.127 { 00:13:16.127 "nsid": 2, 00:13:16.127 "bdev_name": "Malloc3", 00:13:16.127 "name": "Malloc3", 00:13:16.127 "nguid": "2F8D899F21B74672B6D166710DD215E6", 00:13:16.127 "uuid": "2f8d899f-21b7-4672-b6d1-66710dd215e6" 00:13:16.127 } 00:13:16.127 ] 00:13:16.127 }, 00:13:16.127 { 00:13:16.127 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:16.127 "subtype": "NVMe", 00:13:16.127 "listen_addresses": [ 00:13:16.127 { 00:13:16.127 "transport": "VFIOUSER", 00:13:16.127 "trtype": "VFIOUSER", 00:13:16.127 "adrfam": "IPv4", 00:13:16.127 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:16.127 "trsvcid": "0" 00:13:16.127 } 00:13:16.127 ], 00:13:16.127 "allow_any_host": true, 00:13:16.127 "hosts": [], 00:13:16.127 "serial_number": "SPDK2", 00:13:16.127 "model_number": "SPDK bdev Controller", 00:13:16.127 "max_namespaces": 32, 00:13:16.127 "min_cntlid": 1, 00:13:16.127 "max_cntlid": 65519, 00:13:16.127 "namespaces": [ 00:13:16.127 { 00:13:16.127 "nsid": 1, 00:13:16.127 "bdev_name": "Malloc2", 00:13:16.127 "name": "Malloc2", 00:13:16.127 "nguid": "F056EDFCB4B149D98839C9334073FE51", 00:13:16.127 "uuid": "f056edfc-b4b1-49d9-8839-c9334073fe51" 00:13:16.127 } 00:13:16.127 ] 00:13:16.127 } 00:13:16.127 ] 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@34 -- # aerpid=993276 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:16.127 14:49:58 -- common/autotest_common.sh@1251 -- # local i=0 00:13:16.127 14:49:58 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:16.127 14:49:58 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:16.127 14:49:58 -- common/autotest_common.sh@1262 -- # return 0 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:16.127 14:49:58 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:16.127 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.387 [2024-04-26 14:49:58.899265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:16.387 Malloc4 00:13:16.387 14:49:58 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:16.647 [2024-04-26 14:49:59.068362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:16.647 14:49:59 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:16.647 Asynchronous Event Request test 00:13:16.647 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:16.647 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:16.647 Registering asynchronous event callbacks... 00:13:16.647 Starting namespace attribute notice tests for all controllers... 00:13:16.647 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:16.647 aer_cb - Changed Namespace 00:13:16.647 Cleaning up... 00:13:16.647 [ 00:13:16.647 { 00:13:16.647 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:16.647 "subtype": "Discovery", 00:13:16.647 "listen_addresses": [], 00:13:16.647 "allow_any_host": true, 00:13:16.647 "hosts": [] 00:13:16.647 }, 00:13:16.647 { 00:13:16.647 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:16.647 "subtype": "NVMe", 00:13:16.647 "listen_addresses": [ 00:13:16.647 { 00:13:16.647 "transport": "VFIOUSER", 00:13:16.647 "trtype": "VFIOUSER", 00:13:16.647 "adrfam": "IPv4", 00:13:16.647 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:16.647 "trsvcid": "0" 00:13:16.647 } 00:13:16.647 ], 00:13:16.647 "allow_any_host": true, 00:13:16.647 "hosts": [], 00:13:16.647 "serial_number": "SPDK1", 00:13:16.647 "model_number": "SPDK bdev Controller", 00:13:16.647 "max_namespaces": 32, 00:13:16.647 "min_cntlid": 1, 00:13:16.647 "max_cntlid": 65519, 00:13:16.647 "namespaces": [ 00:13:16.647 { 00:13:16.647 "nsid": 1, 00:13:16.647 "bdev_name": "Malloc1", 00:13:16.647 "name": "Malloc1", 00:13:16.647 "nguid": "3ABE128468484EFF9F464C011465E264", 00:13:16.647 "uuid": "3abe1284-6848-4eff-9f46-4c011465e264" 00:13:16.647 }, 00:13:16.647 { 00:13:16.647 "nsid": 2, 00:13:16.647 "bdev_name": "Malloc3", 00:13:16.647 "name": "Malloc3", 00:13:16.647 "nguid": "2F8D899F21B74672B6D166710DD215E6", 00:13:16.647 "uuid": "2f8d899f-21b7-4672-b6d1-66710dd215e6" 00:13:16.647 } 00:13:16.647 ] 00:13:16.647 }, 00:13:16.647 { 00:13:16.647 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:16.647 "subtype": "NVMe", 00:13:16.647 "listen_addresses": [ 00:13:16.647 { 00:13:16.647 "transport": "VFIOUSER", 00:13:16.647 "trtype": "VFIOUSER", 00:13:16.647 "adrfam": "IPv4", 00:13:16.647 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:16.647 "trsvcid": "0" 00:13:16.647 } 00:13:16.647 ], 00:13:16.647 "allow_any_host": true, 00:13:16.647 "hosts": [], 00:13:16.647 "serial_number": "SPDK2", 00:13:16.647 "model_number": "SPDK bdev Controller", 00:13:16.647 "max_namespaces": 32, 00:13:16.648 "min_cntlid": 1, 
00:13:16.648 "max_cntlid": 65519, 00:13:16.648 "namespaces": [ 00:13:16.648 { 00:13:16.648 "nsid": 1, 00:13:16.648 "bdev_name": "Malloc2", 00:13:16.648 "name": "Malloc2", 00:13:16.648 "nguid": "F056EDFCB4B149D98839C9334073FE51", 00:13:16.648 "uuid": "f056edfc-b4b1-49d9-8839-c9334073fe51" 00:13:16.648 }, 00:13:16.648 { 00:13:16.648 "nsid": 2, 00:13:16.648 "bdev_name": "Malloc4", 00:13:16.648 "name": "Malloc4", 00:13:16.648 "nguid": "014FA45C6ED8404CBA1B5E79A7ABFE45", 00:13:16.648 "uuid": "014fa45c-6ed8-404c-ba1b-5e79a7abfe45" 00:13:16.648 } 00:13:16.648 ] 00:13:16.648 } 00:13:16.648 ] 00:13:16.648 14:49:59 -- target/nvmf_vfio_user.sh@44 -- # wait 993276 00:13:16.648 14:49:59 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:16.648 14:49:59 -- target/nvmf_vfio_user.sh@95 -- # killprocess 984173 00:13:16.648 14:49:59 -- common/autotest_common.sh@936 -- # '[' -z 984173 ']' 00:13:16.648 14:49:59 -- common/autotest_common.sh@940 -- # kill -0 984173 00:13:16.648 14:49:59 -- common/autotest_common.sh@941 -- # uname 00:13:16.648 14:49:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:16.648 14:49:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 984173 00:13:16.924 14:49:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:16.924 14:49:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:16.924 14:49:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 984173' 00:13:16.924 killing process with pid 984173 00:13:16.924 14:49:59 -- common/autotest_common.sh@955 -- # kill 984173 00:13:16.924 [2024-04-26 14:49:59.318203] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:16.924 14:49:59 -- common/autotest_common.sh@960 -- # wait 984173 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=993563 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 993563' 00:13:16.924 Process pid: 993563 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:16.924 14:49:59 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 993563 00:13:16.924 14:49:59 -- common/autotest_common.sh@817 -- # '[' -z 993563 ']' 00:13:16.924 14:49:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.924 14:49:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:16.924 14:49:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:16.924 14:49:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:16.924 14:49:59 -- common/autotest_common.sh@10 -- # set +x 00:13:16.924 [2024-04-26 14:49:59.544882] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:16.924 [2024-04-26 14:49:59.545823] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:16.924 [2024-04-26 14:49:59.545873] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.924 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.184 [2024-04-26 14:49:59.607516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.184 [2024-04-26 14:49:59.671662] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.184 [2024-04-26 14:49:59.671706] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.184 [2024-04-26 14:49:59.671715] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.184 [2024-04-26 14:49:59.671724] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.184 [2024-04-26 14:49:59.671731] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.184 [2024-04-26 14:49:59.671898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.184 [2024-04-26 14:49:59.672113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.184 [2024-04-26 14:49:59.672114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.184 [2024-04-26 14:49:59.671949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.184 [2024-04-26 14:49:59.732640] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:17.184 [2024-04-26 14:49:59.732642] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:17.184 [2024-04-26 14:49:59.732941] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:17.184 [2024-04-26 14:49:59.733119] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:17.184 [2024-04-26 14:49:59.733210] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
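    [editor's aside, not part of the captured output] The xtrace that follows wires two vfio-user devices into the freshly started interrupt-mode target. Condensed into a sketch for readability: every command, path, bdev name and NQN below is taken from the trace in this log; only the loop is an editorial convenience.
        rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        $rpc nvmf_create_transport -t VFIOUSER -M -I        # -M -I only for this interrupt-mode run
        mkdir -p /var/run/vfio-user
        for i in 1 2; do
            mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
            $rpc bdev_malloc_create 64 512 -b Malloc$i      # 64 MB malloc bdev, 512-byte blocks
            $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
            $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
            $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
                -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
        done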
00:13:17.755 14:50:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.755 14:50:00 -- common/autotest_common.sh@850 -- # return 0 00:13:17.755 14:50:00 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:18.696 14:50:01 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:18.955 14:50:01 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:18.955 14:50:01 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:18.955 14:50:01 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:18.955 14:50:01 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:18.955 14:50:01 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:19.215 Malloc1 00:13:19.215 14:50:01 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:19.215 14:50:01 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:19.474 14:50:02 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:19.735 14:50:02 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:19.735 14:50:02 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:19.735 14:50:02 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:19.735 Malloc2 00:13:19.735 14:50:02 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:19.994 14:50:02 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:20.255 14:50:02 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:20.255 14:50:02 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:20.255 14:50:02 -- target/nvmf_vfio_user.sh@95 -- # killprocess 993563 00:13:20.255 14:50:02 -- common/autotest_common.sh@936 -- # '[' -z 993563 ']' 00:13:20.255 14:50:02 -- common/autotest_common.sh@940 -- # kill -0 993563 00:13:20.255 14:50:02 -- common/autotest_common.sh@941 -- # uname 00:13:20.255 14:50:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:20.255 14:50:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 993563 00:13:20.255 14:50:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:20.255 14:50:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:20.255 14:50:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 993563' 00:13:20.255 killing process with pid 993563 00:13:20.255 14:50:02 -- common/autotest_common.sh@955 -- # kill 993563 00:13:20.255 14:50:02 -- common/autotest_common.sh@960 -- # wait 993563 00:13:20.515 14:50:03 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:13:20.515 14:50:03 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:20.515 00:13:20.515 real 0m51.210s 00:13:20.515 user 3m23.092s 00:13:20.515 sys 0m2.954s 00:13:20.515 14:50:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:20.515 14:50:03 -- common/autotest_common.sh@10 -- # set +x 00:13:20.515 ************************************ 00:13:20.515 END TEST nvmf_vfio_user 00:13:20.515 ************************************ 00:13:20.515 14:50:03 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:20.515 14:50:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:20.515 14:50:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:20.515 14:50:03 -- common/autotest_common.sh@10 -- # set +x 00:13:20.778 ************************************ 00:13:20.778 START TEST nvmf_vfio_user_nvme_compliance 00:13:20.778 ************************************ 00:13:20.778 14:50:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:20.778 * Looking for test storage... 00:13:20.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:20.778 14:50:03 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.778 14:50:03 -- nvmf/common.sh@7 -- # uname -s 00:13:20.778 14:50:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.778 14:50:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.778 14:50:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.778 14:50:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.778 14:50:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.778 14:50:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.778 14:50:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.778 14:50:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.779 14:50:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.779 14:50:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.779 14:50:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:20.779 14:50:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:20.779 14:50:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.779 14:50:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.779 14:50:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.779 14:50:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.779 14:50:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.779 14:50:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.779 14:50:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.779 14:50:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.779 14:50:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.779 14:50:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.779 14:50:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.779 14:50:03 -- paths/export.sh@5 -- # export PATH 00:13:20.779 14:50:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.779 14:50:03 -- nvmf/common.sh@47 -- # : 0 00:13:20.779 14:50:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.779 14:50:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.779 14:50:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.779 14:50:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.779 14:50:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.779 14:50:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.779 14:50:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.779 14:50:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.779 14:50:03 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:20.779 14:50:03 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:20.779 14:50:03 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:20.779 14:50:03 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:20.780 14:50:03 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:20.780 14:50:03 -- compliance/compliance.sh@20 -- # nvmfpid=994367 00:13:20.780 14:50:03 -- compliance/compliance.sh@21 -- # echo 'Process pid: 994367' 00:13:20.780 Process pid: 994367 00:13:20.780 14:50:03 -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:20.780 14:50:03 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:20.780 14:50:03 -- compliance/compliance.sh@24 -- # waitforlisten 994367 00:13:20.780 14:50:03 -- common/autotest_common.sh@817 -- # '[' -z 994367 ']' 00:13:20.780 14:50:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.780 14:50:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:20.780 14:50:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.780 14:50:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:20.780 14:50:03 -- common/autotest_common.sh@10 -- # set +x 00:13:20.780 [2024-04-26 14:50:03.421217] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:20.780 [2024-04-26 14:50:03.421285] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.042 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.042 [2024-04-26 14:50:03.489052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:21.042 [2024-04-26 14:50:03.560481] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.042 [2024-04-26 14:50:03.560523] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.042 [2024-04-26 14:50:03.560531] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.042 [2024-04-26 14:50:03.560538] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.042 [2024-04-26 14:50:03.560544] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:21.042 [2024-04-26 14:50:03.560690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.042 [2024-04-26 14:50:03.560806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.042 [2024-04-26 14:50:03.560808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.612 14:50:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:21.612 14:50:04 -- common/autotest_common.sh@850 -- # return 0 00:13:21.612 14:50:04 -- compliance/compliance.sh@26 -- # sleep 1 00:13:22.552 14:50:05 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:22.552 14:50:05 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:22.552 14:50:05 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:22.552 14:50:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.552 14:50:05 -- common/autotest_common.sh@10 -- # set +x 00:13:22.812 14:50:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.812 14:50:05 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:22.812 14:50:05 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:22.812 14:50:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.812 14:50:05 -- common/autotest_common.sh@10 -- # set +x 00:13:22.812 malloc0 00:13:22.812 14:50:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.812 14:50:05 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:22.812 14:50:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.812 14:50:05 -- common/autotest_common.sh@10 -- # set +x 00:13:22.812 14:50:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.812 14:50:05 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:22.812 14:50:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.812 14:50:05 -- common/autotest_common.sh@10 -- # set +x 00:13:22.812 14:50:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.812 14:50:05 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:22.812 14:50:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:22.812 14:50:05 -- common/autotest_common.sh@10 -- # set +x 00:13:22.812 14:50:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:22.812 14:50:05 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:22.812 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.812 00:13:22.812 00:13:22.812 CUnit - A unit testing framework for C - Version 2.1-3 00:13:22.812 http://cunit.sourceforge.net/ 00:13:22.812 00:13:22.812 00:13:22.812 Suite: nvme_compliance 00:13:22.812 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-26 14:50:05.461793] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.812 [2024-04-26 14:50:05.463114] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:22.812 [2024-04-26 14:50:05.463126] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:22.812 [2024-04-26 14:50:05.463130] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:22.812 
[2024-04-26 14:50:05.464812] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.072 passed 00:13:23.072 Test: admin_identify_ctrlr_verify_fused ...[2024-04-26 14:50:05.561397] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.072 [2024-04-26 14:50:05.564410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.072 passed 00:13:23.072 Test: admin_identify_ns ...[2024-04-26 14:50:05.657081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.072 [2024-04-26 14:50:05.720847] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:23.072 [2024-04-26 14:50:05.728851] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:23.332 [2024-04-26 14:50:05.749968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.332 passed 00:13:23.332 Test: admin_get_features_mandatory_features ...[2024-04-26 14:50:05.840606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.332 [2024-04-26 14:50:05.843623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.332 passed 00:13:23.332 Test: admin_get_features_optional_features ...[2024-04-26 14:50:05.939138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.332 [2024-04-26 14:50:05.942156] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.332 passed 00:13:23.593 Test: admin_set_features_number_of_queues ...[2024-04-26 14:50:06.035311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.593 [2024-04-26 14:50:06.140943] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.593 passed 00:13:23.593 Test: admin_get_log_page_mandatory_logs ...[2024-04-26 14:50:06.233629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.593 [2024-04-26 14:50:06.236648] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.855 passed 00:13:23.855 Test: admin_get_log_page_with_lpo ...[2024-04-26 14:50:06.328086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.855 [2024-04-26 14:50:06.399850] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:23.855 [2024-04-26 14:50:06.412901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.855 passed 00:13:23.855 Test: fabric_property_get ...[2024-04-26 14:50:06.503540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.855 [2024-04-26 14:50:06.504778] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:23.855 [2024-04-26 14:50:06.506560] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.115 passed 00:13:24.115 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-26 14:50:06.601085] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.115 [2024-04-26 14:50:06.602340] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:24.115 [2024-04-26 14:50:06.604112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:13:24.115 passed 00:13:24.115 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-26 14:50:06.693239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.115 [2024-04-26 14:50:06.776856] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:24.376 [2024-04-26 14:50:06.792846] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:24.376 [2024-04-26 14:50:06.797933] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.376 passed 00:13:24.376 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-26 14:50:06.891530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.376 [2024-04-26 14:50:06.892754] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:24.376 [2024-04-26 14:50:06.894542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.376 passed 00:13:24.376 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-26 14:50:06.988072] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.636 [2024-04-26 14:50:07.063846] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:24.636 [2024-04-26 14:50:07.087857] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:24.636 [2024-04-26 14:50:07.092928] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.636 passed 00:13:24.636 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-26 14:50:07.184909] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.636 [2024-04-26 14:50:07.186136] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:24.636 [2024-04-26 14:50:07.186156] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:24.636 [2024-04-26 14:50:07.187929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.636 passed 00:13:24.636 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-26 14:50:07.280083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.897 [2024-04-26 14:50:07.375855] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:24.897 [2024-04-26 14:50:07.383856] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:24.897 [2024-04-26 14:50:07.391848] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:24.897 [2024-04-26 14:50:07.399844] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:24.897 [2024-04-26 14:50:07.428926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.897 passed 00:13:24.898 Test: admin_create_io_sq_verify_pc ...[2024-04-26 14:50:07.520486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.898 [2024-04-26 14:50:07.536856] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:24.898 [2024-04-26 14:50:07.554655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.158 passed 00:13:25.158 Test: admin_create_io_qp_max_qps ...[2024-04-26 14:50:07.646182] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.099 [2024-04-26 14:50:08.759850] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:26.669 [2024-04-26 14:50:09.149063] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.669 passed 00:13:26.669 Test: admin_create_io_sq_shared_cq ...[2024-04-26 14:50:09.242360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.928 [2024-04-26 14:50:09.373845] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:26.928 [2024-04-26 14:50:09.410919] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.928 passed 00:13:26.928 00:13:26.928 Run Summary: Type Total Ran Passed Failed Inactive 00:13:26.928 suites 1 1 n/a 0 0 00:13:26.928 tests 18 18 18 0 0 00:13:26.928 asserts 360 360 360 0 n/a 00:13:26.928 00:13:26.928 Elapsed time = 1.657 seconds 00:13:26.928 14:50:09 -- compliance/compliance.sh@42 -- # killprocess 994367 00:13:26.928 14:50:09 -- common/autotest_common.sh@936 -- # '[' -z 994367 ']' 00:13:26.928 14:50:09 -- common/autotest_common.sh@940 -- # kill -0 994367 00:13:26.928 14:50:09 -- common/autotest_common.sh@941 -- # uname 00:13:26.928 14:50:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:26.928 14:50:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 994367 00:13:26.928 14:50:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:26.928 14:50:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:26.928 14:50:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 994367' 00:13:26.928 killing process with pid 994367 00:13:26.928 14:50:09 -- common/autotest_common.sh@955 -- # kill 994367 00:13:26.928 14:50:09 -- common/autotest_common.sh@960 -- # wait 994367 00:13:27.187 14:50:09 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:27.187 14:50:09 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:27.187 00:13:27.187 real 0m6.435s 00:13:27.187 user 0m18.363s 00:13:27.187 sys 0m0.501s 00:13:27.187 14:50:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:27.187 14:50:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.187 ************************************ 00:13:27.187 END TEST nvmf_vfio_user_nvme_compliance 00:13:27.187 ************************************ 00:13:27.187 14:50:09 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:27.187 14:50:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:27.187 14:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:27.187 14:50:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.447 ************************************ 00:13:27.447 START TEST nvmf_vfio_user_fuzz 00:13:27.447 ************************************ 00:13:27.447 14:50:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:27.447 * Looking for test storage... 
00:13:27.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.447 14:50:09 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.447 14:50:09 -- nvmf/common.sh@7 -- # uname -s 00:13:27.447 14:50:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.447 14:50:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.447 14:50:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.447 14:50:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.447 14:50:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.447 14:50:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.447 14:50:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.447 14:50:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.447 14:50:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.447 14:50:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.447 14:50:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:27.447 14:50:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:27.447 14:50:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.447 14:50:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.447 14:50:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.447 14:50:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.447 14:50:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.447 14:50:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.447 14:50:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.447 14:50:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.447 14:50:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.447 14:50:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.447 14:50:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.447 14:50:09 -- paths/export.sh@5 -- # export PATH 00:13:27.448 14:50:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.448 14:50:09 -- nvmf/common.sh@47 -- # : 0 00:13:27.448 14:50:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.448 14:50:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.448 14:50:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.448 14:50:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.448 14:50:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.448 14:50:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.448 14:50:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.448 14:50:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=995770 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 995770' 00:13:27.448 Process pid: 995770 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:27.448 14:50:09 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 995770 00:13:27.448 14:50:09 -- common/autotest_common.sh@817 -- # '[' -z 995770 ']' 00:13:27.448 14:50:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.448 14:50:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:27.448 14:50:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:27.448 14:50:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:27.448 14:50:09 -- common/autotest_common.sh@10 -- # set +x 00:13:28.385 14:50:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:28.385 14:50:10 -- common/autotest_common.sh@850 -- # return 0 00:13:28.385 14:50:10 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:29.324 14:50:11 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:29.325 14:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.325 14:50:11 -- common/autotest_common.sh@10 -- # set +x 00:13:29.325 14:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.325 14:50:11 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:29.325 14:50:11 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:29.325 14:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.325 14:50:11 -- common/autotest_common.sh@10 -- # set +x 00:13:29.325 malloc0 00:13:29.325 14:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.325 14:50:11 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:29.325 14:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.325 14:50:11 -- common/autotest_common.sh@10 -- # set +x 00:13:29.325 14:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.325 14:50:11 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:29.325 14:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.325 14:50:11 -- common/autotest_common.sh@10 -- # set +x 00:13:29.325 14:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.325 14:50:11 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:29.325 14:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:29.325 14:50:11 -- common/autotest_common.sh@10 -- # set +x 00:13:29.325 14:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:29.325 14:50:11 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:29.325 14:50:11 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:01.504 Fuzzing completed. 
Shutting down the fuzz application 00:14:01.504 00:14:01.504 Dumping successful admin opcodes: 00:14:01.504 8, 9, 10, 24, 00:14:01.504 Dumping successful io opcodes: 00:14:01.504 0, 00:14:01.504 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1163793, total successful commands: 4578, random_seed: 1192938624 00:14:01.504 NS: 0x200003a1ef00 admin qp, Total commands completed: 146172, total successful commands: 1186, random_seed: 903343488 00:14:01.504 14:50:43 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:01.504 14:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:01.504 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:14:01.504 14:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:01.504 14:50:43 -- target/vfio_user_fuzz.sh@46 -- # killprocess 995770 00:14:01.504 14:50:43 -- common/autotest_common.sh@936 -- # '[' -z 995770 ']' 00:14:01.504 14:50:43 -- common/autotest_common.sh@940 -- # kill -0 995770 00:14:01.504 14:50:43 -- common/autotest_common.sh@941 -- # uname 00:14:01.504 14:50:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:01.504 14:50:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 995770 00:14:01.504 14:50:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:01.504 14:50:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:01.504 14:50:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 995770' 00:14:01.504 killing process with pid 995770 00:14:01.504 14:50:43 -- common/autotest_common.sh@955 -- # kill 995770 00:14:01.504 14:50:43 -- common/autotest_common.sh@960 -- # wait 995770 00:14:01.504 14:50:43 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:01.504 14:50:43 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:01.504 00:14:01.504 real 0m33.693s 00:14:01.504 user 0m40.175s 00:14:01.504 sys 0m22.406s 00:14:01.504 14:50:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:01.504 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:14:01.504 ************************************ 00:14:01.504 END TEST nvmf_vfio_user_fuzz 00:14:01.504 ************************************ 00:14:01.504 14:50:43 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:01.504 14:50:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:01.504 14:50:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:01.504 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:14:01.504 ************************************ 00:14:01.504 START TEST nvmf_host_management 00:14:01.504 ************************************ 00:14:01.504 14:50:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:01.504 * Looking for test storage... 
00:14:01.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.504 14:50:43 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.504 14:50:43 -- nvmf/common.sh@7 -- # uname -s 00:14:01.504 14:50:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.504 14:50:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.504 14:50:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.505 14:50:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.505 14:50:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.505 14:50:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.505 14:50:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.505 14:50:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.505 14:50:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.505 14:50:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.505 14:50:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.505 14:50:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.505 14:50:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.505 14:50:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.505 14:50:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.505 14:50:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.505 14:50:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.505 14:50:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.505 14:50:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.505 14:50:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.505 14:50:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.505 14:50:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.505 14:50:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.505 14:50:43 -- paths/export.sh@5 -- # export PATH 00:14:01.505 14:50:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.505 14:50:43 -- nvmf/common.sh@47 -- # : 0 00:14:01.505 14:50:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.505 14:50:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.505 14:50:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.505 14:50:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.505 14:50:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.505 14:50:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.505 14:50:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.505 14:50:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.505 14:50:43 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.505 14:50:43 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.505 14:50:43 -- target/host_management.sh@105 -- # nvmftestinit 00:14:01.505 14:50:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:01.505 14:50:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.505 14:50:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:01.505 14:50:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:01.505 14:50:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:01.505 14:50:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.505 14:50:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.505 14:50:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.505 14:50:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:01.505 14:50:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:01.505 14:50:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.505 14:50:43 -- common/autotest_common.sh@10 -- # set +x 00:14:08.108 14:50:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:08.108 14:50:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.108 14:50:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:08.108 14:50:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.109 14:50:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.109 14:50:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.109 14:50:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.109 14:50:50 -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.109 14:50:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.109 
14:50:50 -- nvmf/common.sh@296 -- # e810=() 00:14:08.109 14:50:50 -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.109 14:50:50 -- nvmf/common.sh@297 -- # x722=() 00:14:08.109 14:50:50 -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.109 14:50:50 -- nvmf/common.sh@298 -- # mlx=() 00:14:08.109 14:50:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.109 14:50:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.109 14:50:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.109 14:50:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:08.109 14:50:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:08.109 14:50:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.109 14:50:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:08.109 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:08.109 14:50:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.109 14:50:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:08.109 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:08.109 14:50:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:08.109 14:50:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.109 14:50:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.109 14:50:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:08.109 14:50:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.109 14:50:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:14:08.109 Found net devices under 0000:31:00.0: cvl_0_0 00:14:08.109 14:50:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.109 14:50:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.109 14:50:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.109 14:50:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:08.109 14:50:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.109 14:50:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:08.109 Found net devices under 0000:31:00.1: cvl_0_1 00:14:08.109 14:50:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.109 14:50:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:08.109 14:50:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:08.109 14:50:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:08.109 14:50:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:08.109 14:50:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.109 14:50:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.109 14:50:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.109 14:50:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:08.109 14:50:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.109 14:50:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.109 14:50:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:08.109 14:50:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:08.109 14:50:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.109 14:50:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:08.109 14:50:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:08.109 14:50:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.109 14:50:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.370 14:50:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.370 14:50:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.370 14:50:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.370 14:50:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.370 14:50:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.370 14:50:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.370 14:50:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:08.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:14:08.370 00:14:08.370 --- 10.0.0.2 ping statistics --- 00:14:08.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.370 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:14:08.370 14:50:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:08.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:14:08.370 00:14:08.370 --- 10.0.0.1 ping statistics --- 00:14:08.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.370 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:14:08.370 14:50:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.370 14:50:51 -- nvmf/common.sh@411 -- # return 0 00:14:08.370 14:50:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:08.370 14:50:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.370 14:50:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:08.370 14:50:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:08.370 14:50:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.370 14:50:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:08.370 14:50:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:08.630 14:50:51 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:14:08.630 14:50:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:08.630 14:50:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:08.630 14:50:51 -- common/autotest_common.sh@10 -- # set +x 00:14:08.630 ************************************ 00:14:08.630 START TEST nvmf_host_management 00:14:08.630 ************************************ 00:14:08.630 14:50:51 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:14:08.630 14:50:51 -- target/host_management.sh@69 -- # starttarget 00:14:08.630 14:50:51 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:08.630 14:50:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:08.630 14:50:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:08.630 14:50:51 -- common/autotest_common.sh@10 -- # set +x 00:14:08.630 14:50:51 -- nvmf/common.sh@470 -- # nvmfpid=1006028 00:14:08.630 14:50:51 -- nvmf/common.sh@471 -- # waitforlisten 1006028 00:14:08.630 14:50:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:08.630 14:50:51 -- common/autotest_common.sh@817 -- # '[' -z 1006028 ']' 00:14:08.630 14:50:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.630 14:50:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:08.630 14:50:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.630 14:50:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:08.630 14:50:51 -- common/autotest_common.sh@10 -- # set +x 00:14:08.630 [2024-04-26 14:50:51.276139] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:08.630 [2024-04-26 14:50:51.276194] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.890 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.890 [2024-04-26 14:50:51.364968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.890 [2024-04-26 14:50:51.460143] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:08.890 [2024-04-26 14:50:51.460207] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.890 [2024-04-26 14:50:51.460215] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.890 [2024-04-26 14:50:51.460222] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.890 [2024-04-26 14:50:51.460229] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.890 [2024-04-26 14:50:51.460359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.890 [2024-04-26 14:50:51.460526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.890 [2024-04-26 14:50:51.460567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.890 [2024-04-26 14:50:51.460567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:09.461 14:50:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:09.461 14:50:52 -- common/autotest_common.sh@850 -- # return 0 00:14:09.461 14:50:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:09.461 14:50:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:09.461 14:50:52 -- common/autotest_common.sh@10 -- # set +x 00:14:09.461 14:50:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.461 14:50:52 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.461 14:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.461 14:50:52 -- common/autotest_common.sh@10 -- # set +x 00:14:09.461 [2024-04-26 14:50:52.108307] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.461 14:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.461 14:50:52 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:09.461 14:50:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:09.461 14:50:52 -- common/autotest_common.sh@10 -- # set +x 00:14:09.461 14:50:52 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:09.721 14:50:52 -- target/host_management.sh@23 -- # cat 00:14:09.721 14:50:52 -- target/host_management.sh@30 -- # rpc_cmd 00:14:09.721 14:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:09.721 14:50:52 -- common/autotest_common.sh@10 -- # set +x 00:14:09.721 Malloc0 00:14:09.721 [2024-04-26 14:50:52.167548] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.721 14:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:09.721 14:50:52 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:09.721 14:50:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:09.721 14:50:52 -- common/autotest_common.sh@10 -- # set +x 00:14:09.721 14:50:52 -- target/host_management.sh@73 -- # perfpid=1006214 00:14:09.721 14:50:52 -- target/host_management.sh@74 -- # waitforlisten 1006214 /var/tmp/bdevperf.sock 00:14:09.721 14:50:52 -- common/autotest_common.sh@817 -- # '[' -z 1006214 ']' 00:14:09.721 14:50:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:09.721 14:50:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:09.721 14:50:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:14:09.721 14:50:52 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:09.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:09.721 14:50:52 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:09.721 14:50:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:09.721 14:50:52 -- common/autotest_common.sh@10 -- # set +x 00:14:09.721 14:50:52 -- nvmf/common.sh@521 -- # config=() 00:14:09.721 14:50:52 -- nvmf/common.sh@521 -- # local subsystem config 00:14:09.721 14:50:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:09.721 14:50:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:09.721 { 00:14:09.721 "params": { 00:14:09.721 "name": "Nvme$subsystem", 00:14:09.721 "trtype": "$TEST_TRANSPORT", 00:14:09.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:09.721 "adrfam": "ipv4", 00:14:09.721 "trsvcid": "$NVMF_PORT", 00:14:09.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:09.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:09.721 "hdgst": ${hdgst:-false}, 00:14:09.721 "ddgst": ${ddgst:-false} 00:14:09.721 }, 00:14:09.721 "method": "bdev_nvme_attach_controller" 00:14:09.721 } 00:14:09.721 EOF 00:14:09.721 )") 00:14:09.721 14:50:52 -- nvmf/common.sh@543 -- # cat 00:14:09.721 14:50:52 -- nvmf/common.sh@545 -- # jq . 00:14:09.721 14:50:52 -- nvmf/common.sh@546 -- # IFS=, 00:14:09.721 14:50:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:09.721 "params": { 00:14:09.721 "name": "Nvme0", 00:14:09.721 "trtype": "tcp", 00:14:09.721 "traddr": "10.0.0.2", 00:14:09.721 "adrfam": "ipv4", 00:14:09.721 "trsvcid": "4420", 00:14:09.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:09.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:09.721 "hdgst": false, 00:14:09.721 "ddgst": false 00:14:09.721 }, 00:14:09.721 "method": "bdev_nvme_attach_controller" 00:14:09.721 }' 00:14:09.721 [2024-04-26 14:50:52.264391] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:09.721 [2024-04-26 14:50:52.264441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006214 ] 00:14:09.721 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.721 [2024-04-26 14:50:52.324506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.981 [2024-04-26 14:50:52.387498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.242 Running I/O for 10 seconds... 
00:14:10.505 14:50:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:10.505 14:50:53 -- common/autotest_common.sh@850 -- # return 0 00:14:10.505 14:50:53 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:10.505 14:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:10.505 14:50:53 -- common/autotest_common.sh@10 -- # set +x 00:14:10.505 14:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.505 14:50:53 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:10.505 14:50:53 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:10.505 14:50:53 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:10.505 14:50:53 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:10.505 14:50:53 -- target/host_management.sh@52 -- # local ret=1 00:14:10.505 14:50:53 -- target/host_management.sh@53 -- # local i 00:14:10.505 14:50:53 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:10.505 14:50:53 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:10.505 14:50:53 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:10.505 14:50:53 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:10.505 14:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:10.505 14:50:53 -- common/autotest_common.sh@10 -- # set +x 00:14:10.505 14:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.505 14:50:53 -- target/host_management.sh@55 -- # read_io_count=515 00:14:10.505 14:50:53 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:14:10.505 14:50:53 -- target/host_management.sh@59 -- # ret=0 00:14:10.505 14:50:53 -- target/host_management.sh@60 -- # break 00:14:10.505 14:50:53 -- target/host_management.sh@64 -- # return 0 00:14:10.505 14:50:53 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:10.505 14:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:10.505 14:50:53 -- common/autotest_common.sh@10 -- # set +x 00:14:10.505 [2024-04-26 14:50:53.122613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bf090 is same with the state(5) to be set 00:14:10.505 [2024-04-26 14:50:53.122659] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bf090 is same with the state(5) to be set 00:14:10.505 [2024-04-26 14:50:53.122667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bf090 is same with the state(5) to be set 00:14:10.505 [2024-04-26 14:50:53.122673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bf090 is same with the state(5) to be set 00:14:10.505 [2024-04-26 14:50:53.122680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bf090 is same with the state(5) to be set 00:14:10.505 [2024-04-26 14:50:53.122687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bf090 is same with the state(5) to be set 00:14:10.505 [2024-04-26 14:50:53.122694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bf090 is same with the state(5) to be set 00:14:10.505 [2024-04-26 14:50:53.122700] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bf090 is same with the 
state(5) to be set 00:14:10.505 [... tcp.c:1587:nvmf_tcp_qpair_set_recv_state logs the same "recv state of tqpair=0x17bf090 is same with the state(5) to be set" error a few dozen more times while the queue pair is torn down ...] 00:14:10.506 [... nvme_qpair.c then prints one READ command (sqid:1, nsid:1, len:128, cid 0 through 63, lba 73728 through 81792) and a paired "ABORTED - SQ DELETION (00/08)" completion for each of the 64 outstanding verify reads (queue depth 64) ...] 00:14:10.507 [2024-04-26 14:50:53.124447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0b0 is same with the state(5) to be set 00:14:10.507 [2024-04-26 14:50:53.124489] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb:
*NOTICE*: qpair 0xf8a0b0 was disconnected and freed. reset controller. 00:14:10.507 [2024-04-26 14:50:53.125704] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:10.507 task offset: 73728 on job bdev=Nvme0n1 fails 00:14:10.507 00:14:10.507 Latency(us) 00:14:10.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.507 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:10.507 Job: Nvme0n1 ended in about 0.43 seconds with error 00:14:10.507 Verification LBA range: start 0x0 length 0x400 00:14:10.507 Nvme0n1 : 0.43 1339.41 83.71 148.82 0.00 41751.02 6553.60 36263.25 00:14:10.507 =================================================================================================================== 00:14:10.507 Total : 1339.41 83.71 148.82 0.00 41751.02 6553.60 36263.25 00:14:10.507 14:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.507 [2024-04-26 14:50:53.127762] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:10.507 [2024-04-26 14:50:53.127787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79620 (9): Bad file descriptor 00:14:10.507 14:50:53 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:10.507 14:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:10.507 14:50:53 -- common/autotest_common.sh@10 -- # set +x 00:14:10.507 [2024-04-26 14:50:53.130332] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:10.507 [2024-04-26 14:50:53.130409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:10.507 [2024-04-26 14:50:53.130430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.507 [2024-04-26 14:50:53.130444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:10.507 [2024-04-26 14:50:53.130452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:14:10.507 [2024-04-26 14:50:53.130459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:10.507 [2024-04-26 14:50:53.130466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb79620 00:14:10.507 [2024-04-26 14:50:53.130484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79620 (9): Bad file descriptor 00:14:10.507 [2024-04-26 14:50:53.130495] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:10.507 [2024-04-26 14:50:53.130502] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:10.507 [2024-04-26 14:50:53.130510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:10.507 [2024-04-26 14:50:53.130522] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
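
The read_io_count=515 check earlier in this run comes from the harness's waitforio helper, which polls bdevperf's per-bdev stats over its RPC socket until reads are flowing, before the test revokes the host's access mid-I/O. A simplified sketch of that loop, using the rpc.py path, socket, and jq filter shown above (the retry/sleep policy below is an assumption, not the script's exact bookkeeping):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }

  # Wait until Nvme0n1 has completed at least 100 reads, giving up after 10 attempts.
  read_io_count=0
  for ((i = 10; i > 0; i--)); do
      read_io_count=$(rpc bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
      (( read_io_count >= 100 )) && break
      sleep 0.25
  done
  (( read_io_count >= 100 )) || { echo "no reads observed on Nvme0n1" >&2; exit 1; }
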
00:14:10.507 14:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.507 14:50:53 -- target/host_management.sh@87 -- # sleep 1 00:14:11.890 14:50:54 -- target/host_management.sh@91 -- # kill -9 1006214 00:14:11.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1006214) - No such process 00:14:11.890 14:50:54 -- target/host_management.sh@91 -- # true 00:14:11.890 14:50:54 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:11.890 14:50:54 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:11.890 14:50:54 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:11.890 14:50:54 -- nvmf/common.sh@521 -- # config=() 00:14:11.890 14:50:54 -- nvmf/common.sh@521 -- # local subsystem config 00:14:11.890 14:50:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:11.890 14:50:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:11.890 { 00:14:11.890 "params": { 00:14:11.890 "name": "Nvme$subsystem", 00:14:11.890 "trtype": "$TEST_TRANSPORT", 00:14:11.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.890 "adrfam": "ipv4", 00:14:11.890 "trsvcid": "$NVMF_PORT", 00:14:11.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.890 "hdgst": ${hdgst:-false}, 00:14:11.890 "ddgst": ${ddgst:-false} 00:14:11.890 }, 00:14:11.890 "method": "bdev_nvme_attach_controller" 00:14:11.890 } 00:14:11.890 EOF 00:14:11.890 )") 00:14:11.890 14:50:54 -- nvmf/common.sh@543 -- # cat 00:14:11.890 14:50:54 -- nvmf/common.sh@545 -- # jq . 00:14:11.890 14:50:54 -- nvmf/common.sh@546 -- # IFS=, 00:14:11.890 14:50:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:11.890 "params": { 00:14:11.890 "name": "Nvme0", 00:14:11.890 "trtype": "tcp", 00:14:11.890 "traddr": "10.0.0.2", 00:14:11.890 "adrfam": "ipv4", 00:14:11.890 "trsvcid": "4420", 00:14:11.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:11.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:11.890 "hdgst": false, 00:14:11.890 "ddgst": false 00:14:11.890 }, 00:14:11.890 "method": "bdev_nvme_attach_controller" 00:14:11.890 }' 00:14:11.890 [2024-04-26 14:50:54.194886] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:11.890 [2024-04-26 14:50:54.194940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006568 ] 00:14:11.890 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.890 [2024-04-26 14:50:54.254718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.890 [2024-04-26 14:50:54.316777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.150 Running I/O for 1 seconds... 
00:14:13.091 00:14:13.091 Latency(us) 00:14:13.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.091 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:13.091 Verification LBA range: start 0x0 length 0x400 00:14:13.091 Nvme0n1 : 1.01 1651.53 103.22 0.00 0.00 38071.06 5761.71 36044.80 00:14:13.091 =================================================================================================================== 00:14:13.091 Total : 1651.53 103.22 0.00 0.00 38071.06 5761.71 36044.80 00:14:13.351 14:50:55 -- target/host_management.sh@102 -- # stoptarget 00:14:13.351 14:50:55 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:13.351 14:50:55 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:13.351 14:50:55 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:13.351 14:50:55 -- target/host_management.sh@40 -- # nvmftestfini 00:14:13.351 14:50:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:13.351 14:50:55 -- nvmf/common.sh@117 -- # sync 00:14:13.351 14:50:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:13.351 14:50:55 -- nvmf/common.sh@120 -- # set +e 00:14:13.351 14:50:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:13.351 14:50:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:13.351 rmmod nvme_tcp 00:14:13.351 rmmod nvme_fabrics 00:14:13.351 rmmod nvme_keyring 00:14:13.351 14:50:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.351 14:50:55 -- nvmf/common.sh@124 -- # set -e 00:14:13.351 14:50:55 -- nvmf/common.sh@125 -- # return 0 00:14:13.351 14:50:55 -- nvmf/common.sh@478 -- # '[' -n 1006028 ']' 00:14:13.351 14:50:55 -- nvmf/common.sh@479 -- # killprocess 1006028 00:14:13.351 14:50:55 -- common/autotest_common.sh@936 -- # '[' -z 1006028 ']' 00:14:13.351 14:50:55 -- common/autotest_common.sh@940 -- # kill -0 1006028 00:14:13.351 14:50:55 -- common/autotest_common.sh@941 -- # uname 00:14:13.351 14:50:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.351 14:50:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1006028 00:14:13.351 14:50:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:13.351 14:50:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:13.351 14:50:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1006028' 00:14:13.351 killing process with pid 1006028 00:14:13.351 14:50:55 -- common/autotest_common.sh@955 -- # kill 1006028 00:14:13.351 14:50:55 -- common/autotest_common.sh@960 -- # wait 1006028 00:14:13.351 [2024-04-26 14:50:56.000443] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:13.611 14:50:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:13.611 14:50:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:13.611 14:50:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:13.611 14:50:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.611 14:50:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.611 14:50:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.611 14:50:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.611 14:50:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.521 14:50:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
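
With the run finished, nvmftestfini unwinds the fixture: the nvme transport modules are unloaded, the nvmf target (pid 1006028 in this run) is killed and reaped, and the addresses and namespace created by nvmftestinit are flushed. A rough manual equivalent, assuming the pid, interface, and namespace names from this log (the netns deletion is an assumption about what _remove_spdk_ns does here):

  nvmfpid=1006214_target=1006028; nvmfpid=1006028          # target pid from this log
  sudo modprobe -v -r nvme-tcp          # log shows nvme_tcp/nvme_fabrics/nvme_keyring unloading
  sudo modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                       # stop the nvmf_tgt process
  tail --pid="$nvmfpid" -f /dev/null    # wait for a non-child pid to exit
  sudo ip -4 addr flush cvl_0_1         # drop the initiator-side test address
  sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # tear down the target-side namespace
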
00:14:15.521 00:14:15.521 real 0m6.881s 00:14:15.521 user 0m20.876s 00:14:15.521 sys 0m1.030s 00:14:15.521 14:50:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:15.521 14:50:58 -- common/autotest_common.sh@10 -- # set +x 00:14:15.521 ************************************ 00:14:15.521 END TEST nvmf_host_management 00:14:15.521 ************************************ 00:14:15.521 14:50:58 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:15.521 00:14:15.521 real 0m14.404s 00:14:15.521 user 0m22.963s 00:14:15.521 sys 0m6.379s 00:14:15.521 14:50:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:15.521 14:50:58 -- common/autotest_common.sh@10 -- # set +x 00:14:15.521 ************************************ 00:14:15.521 END TEST nvmf_host_management 00:14:15.521 ************************************ 00:14:15.521 14:50:58 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:15.521 14:50:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:15.521 14:50:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.521 14:50:58 -- common/autotest_common.sh@10 -- # set +x 00:14:15.781 ************************************ 00:14:15.781 START TEST nvmf_lvol 00:14:15.781 ************************************ 00:14:15.781 14:50:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:15.781 * Looking for test storage... 00:14:15.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.781 14:50:58 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.781 14:50:58 -- nvmf/common.sh@7 -- # uname -s 00:14:16.041 14:50:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.041 14:50:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.041 14:50:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.041 14:50:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.041 14:50:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.041 14:50:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.041 14:50:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.041 14:50:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.042 14:50:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.042 14:50:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.042 14:50:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:16.042 14:50:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:16.042 14:50:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.042 14:50:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.042 14:50:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.042 14:50:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.042 14:50:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.042 14:50:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.042 14:50:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.042 14:50:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.042 14:50:58 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the golangci/protoc/go toolchain bin dirs repeated several times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.042 14:50:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the same value with the go bin dir prepended ...] 00:14:16.042 14:50:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... the same value with the protoc bin dir prepended ...] 00:14:16.042 14:50:58 -- paths/export.sh@5 -- # export PATH 00:14:16.042 14:50:58 -- paths/export.sh@6 -- # echo [... the PATH value above ...] 00:14:16.042 14:50:58 -- nvmf/common.sh@47 -- # : 0 00:14:16.042 14:50:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.042 14:50:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.042 14:50:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.042 14:50:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.042 14:50:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.042 14:50:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.042 14:50:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.042 14:50:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.042 14:50:58 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:16.042 14:50:58 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:16.042 14:50:58 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:16.042 14:50:58 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:16.042 14:50:58 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.042 14:50:58 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:16.042 14:50:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:16.042 14:50:58 -- nvmf/common.sh@435 -- #
trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.042 14:50:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:16.042 14:50:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:16.042 14:50:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:16.042 14:50:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.042 14:50:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.042 14:50:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.042 14:50:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:16.042 14:50:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:16.042 14:50:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.042 14:50:58 -- common/autotest_common.sh@10 -- # set +x 00:14:24.184 14:51:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:24.184 14:51:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.184 14:51:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.184 14:51:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.184 14:51:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.184 14:51:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.184 14:51:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.184 14:51:05 -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.184 14:51:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.184 14:51:05 -- nvmf/common.sh@296 -- # e810=() 00:14:24.184 14:51:05 -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.184 14:51:05 -- nvmf/common.sh@297 -- # x722=() 00:14:24.184 14:51:05 -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.184 14:51:05 -- nvmf/common.sh@298 -- # mlx=() 00:14:24.184 14:51:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.184 14:51:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.184 14:51:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.184 14:51:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.184 14:51:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.184 14:51:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.184 14:51:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:24.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:24.184 14:51:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.184 
14:51:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.184 14:51:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:24.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:24.184 14:51:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.184 14:51:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.184 14:51:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.184 14:51:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.184 14:51:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.184 14:51:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:24.184 Found net devices under 0000:31:00.0: cvl_0_0 00:14:24.184 14:51:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.184 14:51:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.184 14:51:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.184 14:51:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.184 14:51:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.184 14:51:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:24.184 Found net devices under 0000:31:00.1: cvl_0_1 00:14:24.184 14:51:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.184 14:51:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:24.184 14:51:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:24.184 14:51:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:24.184 14:51:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.184 14:51:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.184 14:51:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.184 14:51:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.184 14:51:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.184 14:51:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.184 14:51:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.184 14:51:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.184 14:51:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.184 14:51:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.184 14:51:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.184 14:51:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.184 14:51:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.184 14:51:05 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:14:24.184 14:51:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.184 14:51:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.184 14:51:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.184 14:51:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.184 14:51:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.184 14:51:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:14:24.184 00:14:24.184 --- 10.0.0.2 ping statistics --- 00:14:24.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.184 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:14:24.184 14:51:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:14:24.184 00:14:24.184 --- 10.0.0.1 ping statistics --- 00:14:24.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.184 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:14:24.184 14:51:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.184 14:51:05 -- nvmf/common.sh@411 -- # return 0 00:14:24.184 14:51:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:24.184 14:51:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.184 14:51:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:24.184 14:51:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.184 14:51:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:24.184 14:51:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:24.184 14:51:05 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:24.184 14:51:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:24.184 14:51:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:24.184 14:51:05 -- common/autotest_common.sh@10 -- # set +x 00:14:24.184 14:51:05 -- nvmf/common.sh@470 -- # nvmfpid=1011426 00:14:24.184 14:51:05 -- nvmf/common.sh@471 -- # waitforlisten 1011426 00:14:24.184 14:51:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:24.184 14:51:05 -- common/autotest_common.sh@817 -- # '[' -z 1011426 ']' 00:14:24.184 14:51:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.184 14:51:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:24.184 14:51:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.184 14:51:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:24.184 14:51:05 -- common/autotest_common.sh@10 -- # set +x 00:14:24.184 [2024-04-26 14:51:05.965963] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
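
The nvmf_tcp_init sequence just logged boils down to the commands below: the target-side E810 port (cvl_0_0, 10.0.0.2) is moved into its own network namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, so the two ports exchange traffic over the physical link on one host. Copied from this run; all commands run as root:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the target port
  ping -c 1 10.0.0.2                                     # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator sanity check
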
00:14:24.184 [2024-04-26 14:51:05.966015] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.184 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.184 [2024-04-26 14:51:06.033611] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.184 [2024-04-26 14:51:06.097781] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.184 [2024-04-26 14:51:06.097820] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.184 [2024-04-26 14:51:06.097828] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.184 [2024-04-26 14:51:06.097834] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.184 [2024-04-26 14:51:06.097847] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.184 [2024-04-26 14:51:06.097914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.184 [2024-04-26 14:51:06.098052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.185 [2024-04-26 14:51:06.098143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.185 14:51:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:24.185 14:51:06 -- common/autotest_common.sh@850 -- # return 0 00:14:24.185 14:51:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:24.185 14:51:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:24.185 14:51:06 -- common/autotest_common.sh@10 -- # set +x 00:14:24.185 14:51:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.185 14:51:06 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:24.446 [2024-04-26 14:51:06.901921] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.446 14:51:06 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:24.707 14:51:07 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:24.707 14:51:07 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:24.707 14:51:07 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:24.708 14:51:07 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:24.969 14:51:07 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:25.230 14:51:07 -- target/nvmf_lvol.sh@29 -- # lvs=e4f8b7dd-5241-4ee3-ac22-7903406ec5fd 00:14:25.230 14:51:07 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e4f8b7dd-5241-4ee3-ac22-7903406ec5fd lvol 20 00:14:25.230 14:51:07 -- target/nvmf_lvol.sh@32 -- # lvol=73e3066d-c315-4b57-926c-5f402ca4ae2c 00:14:25.230 14:51:07 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:25.490 14:51:07 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73e3066d-c315-4b57-926c-5f402ca4ae2c 00:14:25.490 14:51:08 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:25.751 [2024-04-26 14:51:08.286744] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.751 14:51:08 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.012 14:51:08 -- target/nvmf_lvol.sh@42 -- # perf_pid=1012108 00:14:26.012 14:51:08 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:26.012 14:51:08 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:26.012 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.956 14:51:09 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 73e3066d-c315-4b57-926c-5f402ca4ae2c MY_SNAPSHOT 00:14:27.216 14:51:09 -- target/nvmf_lvol.sh@47 -- # snapshot=afc73bd8-5ca2-415f-a657-a9fa57e03b0a 00:14:27.216 14:51:09 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 73e3066d-c315-4b57-926c-5f402ca4ae2c 30 00:14:27.477 14:51:09 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone afc73bd8-5ca2-415f-a657-a9fa57e03b0a MY_CLONE 00:14:27.477 14:51:10 -- target/nvmf_lvol.sh@49 -- # clone=0b6d80a5-bcc7-4246-8746-daa37cae9b06 00:14:27.477 14:51:10 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0b6d80a5-bcc7-4246-8746-daa37cae9b06 00:14:28.048 14:51:10 -- target/nvmf_lvol.sh@53 -- # wait 1012108 00:14:36.183 Initializing NVMe Controllers 00:14:36.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:36.183 Controller IO queue size 128, less than required. 00:14:36.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:36.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:36.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:36.183 Initialization complete. Launching workers. 
00:14:36.183 ======================================================== 00:14:36.183 Latency(us) 00:14:36.183 Device Information : IOPS MiB/s Average min max 00:14:36.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17175.99 67.09 7455.82 817.80 64389.41 00:14:36.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11896.81 46.47 10766.23 3916.92 51444.13 00:14:36.183 ======================================================== 00:14:36.183 Total : 29072.80 113.57 8810.46 817.80 64389.41 00:14:36.183 00:14:36.183 14:51:18 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:36.183 14:51:18 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 73e3066d-c315-4b57-926c-5f402ca4ae2c 00:14:36.444 14:51:18 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e4f8b7dd-5241-4ee3-ac22-7903406ec5fd 00:14:36.705 14:51:19 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:36.705 14:51:19 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:36.705 14:51:19 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:36.705 14:51:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:36.705 14:51:19 -- nvmf/common.sh@117 -- # sync 00:14:36.705 14:51:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:36.705 14:51:19 -- nvmf/common.sh@120 -- # set +e 00:14:36.705 14:51:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:36.705 14:51:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:36.705 rmmod nvme_tcp 00:14:36.705 rmmod nvme_fabrics 00:14:36.705 rmmod nvme_keyring 00:14:36.705 14:51:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:36.705 14:51:19 -- nvmf/common.sh@124 -- # set -e 00:14:36.705 14:51:19 -- nvmf/common.sh@125 -- # return 0 00:14:36.705 14:51:19 -- nvmf/common.sh@478 -- # '[' -n 1011426 ']' 00:14:36.705 14:51:19 -- nvmf/common.sh@479 -- # killprocess 1011426 00:14:36.705 14:51:19 -- common/autotest_common.sh@936 -- # '[' -z 1011426 ']' 00:14:36.705 14:51:19 -- common/autotest_common.sh@940 -- # kill -0 1011426 00:14:36.705 14:51:19 -- common/autotest_common.sh@941 -- # uname 00:14:36.705 14:51:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:36.705 14:51:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1011426 00:14:36.705 14:51:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:36.705 14:51:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:36.705 14:51:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1011426' 00:14:36.705 killing process with pid 1011426 00:14:36.705 14:51:19 -- common/autotest_common.sh@955 -- # kill 1011426 00:14:36.705 14:51:19 -- common/autotest_common.sh@960 -- # wait 1011426 00:14:36.966 14:51:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:36.966 14:51:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:36.966 14:51:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:36.966 14:51:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.966 14:51:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.966 14:51:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.966 14:51:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.966 14:51:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
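Note: stripped of the harness wrappers, the nvmf_lvol.sh flow above is the following rpc.py sequence (rpc.py abbreviates scripts/rpc.py; the lvstore/lvol/snapshot/clone UUIDs in the log are generated per run, so placeholders are used here):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                     # -> Malloc0
rpc.py bdev_malloc_create 64 512                                     # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs                            # -> <lvs-uuid>
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                        # -> <lvol-uuid>
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# spdk_nvme_perf runs against the exported namespace; while that I/O is in flight:
rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT                    # -> <snap-uuid>
rpc.py bdev_lvol_resize <lvol-uuid> 30
rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE                          # -> <clone-uuid>
rpc.py bdev_lvol_inflate <clone-uuid>
# teardown once the perf workload completes
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete <lvol-uuid>
rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>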
00:14:38.877 14:51:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.877 00:14:38.877 real 0m23.174s 00:14:38.877 user 1m3.143s 00:14:38.877 sys 0m7.795s 00:14:38.877 14:51:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:38.877 14:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:38.877 ************************************ 00:14:38.877 END TEST nvmf_lvol 00:14:38.877 ************************************ 00:14:39.138 14:51:21 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:39.138 14:51:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:39.138 14:51:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:39.138 14:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:39.138 ************************************ 00:14:39.138 START TEST nvmf_lvs_grow 00:14:39.138 ************************************ 00:14:39.138 14:51:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:39.138 * Looking for test storage... 00:14:39.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.138 14:51:21 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.138 14:51:21 -- nvmf/common.sh@7 -- # uname -s 00:14:39.398 14:51:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.398 14:51:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.398 14:51:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.398 14:51:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.398 14:51:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.398 14:51:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.398 14:51:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.398 14:51:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.398 14:51:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.398 14:51:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.398 14:51:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:39.398 14:51:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:39.398 14:51:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.398 14:51:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.398 14:51:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.398 14:51:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.398 14:51:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.398 14:51:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.398 14:51:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.398 14:51:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.398 14:51:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.398 14:51:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.398 14:51:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.398 14:51:21 -- paths/export.sh@5 -- # export PATH 00:14:39.398 14:51:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.398 14:51:21 -- nvmf/common.sh@47 -- # : 0 00:14:39.398 14:51:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.398 14:51:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.398 14:51:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.398 14:51:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.398 14:51:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.398 14:51:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.398 14:51:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.398 14:51:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.398 14:51:21 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:39.398 14:51:21 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:39.398 14:51:21 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:39.398 14:51:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:39.398 14:51:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.399 14:51:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:39.399 14:51:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:39.399 14:51:21 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:14:39.399 14:51:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.399 14:51:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.399 14:51:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.399 14:51:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:39.399 14:51:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:39.399 14:51:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:39.399 14:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:47.617 14:51:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:47.617 14:51:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:47.617 14:51:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:47.617 14:51:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:47.617 14:51:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:47.617 14:51:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:47.617 14:51:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:47.617 14:51:28 -- nvmf/common.sh@295 -- # net_devs=() 00:14:47.617 14:51:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:47.617 14:51:28 -- nvmf/common.sh@296 -- # e810=() 00:14:47.617 14:51:28 -- nvmf/common.sh@296 -- # local -ga e810 00:14:47.617 14:51:28 -- nvmf/common.sh@297 -- # x722=() 00:14:47.617 14:51:28 -- nvmf/common.sh@297 -- # local -ga x722 00:14:47.617 14:51:28 -- nvmf/common.sh@298 -- # mlx=() 00:14:47.617 14:51:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:47.617 14:51:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.617 14:51:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:47.617 14:51:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:47.617 14:51:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:47.617 14:51:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.617 14:51:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:47.617 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:47.617 14:51:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.617 
14:51:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.617 14:51:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:47.617 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:47.617 14:51:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:47.617 14:51:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.617 14:51:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.617 14:51:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:47.617 14:51:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.617 14:51:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:47.617 Found net devices under 0000:31:00.0: cvl_0_0 00:14:47.617 14:51:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.617 14:51:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.617 14:51:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.617 14:51:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:47.617 14:51:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.617 14:51:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:47.617 Found net devices under 0000:31:00.1: cvl_0_1 00:14:47.617 14:51:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.617 14:51:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:47.617 14:51:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:47.617 14:51:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:47.617 14:51:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:47.617 14:51:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.617 14:51:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.617 14:51:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.617 14:51:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:47.617 14:51:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.617 14:51:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.617 14:51:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:47.617 14:51:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.617 14:51:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.617 14:51:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:47.617 14:51:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:47.617 14:51:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.617 14:51:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.617 14:51:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.617 14:51:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.617 14:51:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:47.617 
14:51:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.617 14:51:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.617 14:51:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.617 14:51:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:47.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:14:47.617 00:14:47.618 --- 10.0.0.2 ping statistics --- 00:14:47.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.618 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:14:47.618 14:51:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:47.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:14:47.618 00:14:47.618 --- 10.0.0.1 ping statistics --- 00:14:47.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.618 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:14:47.618 14:51:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.618 14:51:29 -- nvmf/common.sh@411 -- # return 0 00:14:47.618 14:51:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:47.618 14:51:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.618 14:51:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:47.618 14:51:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:47.618 14:51:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.618 14:51:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:47.618 14:51:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:47.618 14:51:29 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:47.618 14:51:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:47.618 14:51:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:47.618 14:51:29 -- common/autotest_common.sh@10 -- # set +x 00:14:47.618 14:51:29 -- nvmf/common.sh@470 -- # nvmfpid=1018680 00:14:47.618 14:51:29 -- nvmf/common.sh@471 -- # waitforlisten 1018680 00:14:47.618 14:51:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:47.618 14:51:29 -- common/autotest_common.sh@817 -- # '[' -z 1018680 ']' 00:14:47.618 14:51:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.618 14:51:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:47.618 14:51:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.618 14:51:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:47.618 14:51:29 -- common/autotest_common.sh@10 -- # set +x 00:14:47.618 [2024-04-26 14:51:29.184367] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
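Note: as in the earlier lvol run, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket answers before any configuration is issued; roughly (core mask and pid are per run, and the backgrounding shown here is an assumption about what the helper does):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!                                                           # 1018680 in this run
# waitforlisten: poll /var/tmp/spdk.sock until the target accepts RPCs
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # then create the TCP transport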
00:14:47.618 [2024-04-26 14:51:29.184461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.618 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.618 [2024-04-26 14:51:29.262227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.618 [2024-04-26 14:51:29.336876] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.618 [2024-04-26 14:51:29.336916] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.618 [2024-04-26 14:51:29.336924] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.618 [2024-04-26 14:51:29.336931] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.618 [2024-04-26 14:51:29.336937] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.618 [2024-04-26 14:51:29.336957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.618 14:51:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:47.618 14:51:29 -- common/autotest_common.sh@850 -- # return 0 00:14:47.618 14:51:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:47.618 14:51:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:47.618 14:51:29 -- common/autotest_common.sh@10 -- # set +x 00:14:47.618 14:51:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.618 14:51:29 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:47.618 [2024-04-26 14:51:30.132380] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.618 14:51:30 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:47.618 14:51:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:47.618 14:51:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.618 14:51:30 -- common/autotest_common.sh@10 -- # set +x 00:14:47.879 ************************************ 00:14:47.879 START TEST lvs_grow_clean 00:14:47.879 ************************************ 00:14:47.879 14:51:30 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:47.879 14:51:30 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:47.879 14:51:30 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:48.140 14:51:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:14:48.140 14:51:30 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:14:48.140 14:51:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:48.401 14:51:30 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:48.401 14:51:30 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:48.401 14:51:30 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 lvol 150 00:14:48.401 14:51:30 -- target/nvmf_lvs_grow.sh@33 -- # lvol=36b50292-cbe3-4049-8f8c-ef10929c40ef 00:14:48.401 14:51:30 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:48.401 14:51:30 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:48.661 [2024-04-26 14:51:31.112419] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:48.661 [2024-04-26 14:51:31.112472] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:48.661 true 00:14:48.661 14:51:31 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:14:48.661 14:51:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:48.661 14:51:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:48.661 14:51:31 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:48.923 14:51:31 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 36b50292-cbe3-4049-8f8c-ef10929c40ef 00:14:48.923 14:51:31 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:49.182 [2024-04-26 14:51:31.706244] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.182 14:51:31 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.442 14:51:31 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1019385 00:14:49.442 14:51:31 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:49.442 14:51:31 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:49.442 14:51:31 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1019385 
/var/tmp/bdevperf.sock 00:14:49.442 14:51:31 -- common/autotest_common.sh@817 -- # '[' -z 1019385 ']' 00:14:49.442 14:51:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.442 14:51:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:49.442 14:51:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.442 14:51:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:49.442 14:51:31 -- common/autotest_common.sh@10 -- # set +x 00:14:49.442 [2024-04-26 14:51:31.918394] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:49.442 [2024-04-26 14:51:31.918443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019385 ] 00:14:49.442 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.442 [2024-04-26 14:51:31.993690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.442 [2024-04-26 14:51:32.056210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.011 14:51:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:50.270 14:51:32 -- common/autotest_common.sh@850 -- # return 0 00:14:50.270 14:51:32 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:50.530 Nvme0n1 00:14:50.530 14:51:33 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:50.799 [ 00:14:50.799 { 00:14:50.799 "name": "Nvme0n1", 00:14:50.799 "aliases": [ 00:14:50.799 "36b50292-cbe3-4049-8f8c-ef10929c40ef" 00:14:50.799 ], 00:14:50.799 "product_name": "NVMe disk", 00:14:50.799 "block_size": 4096, 00:14:50.799 "num_blocks": 38912, 00:14:50.799 "uuid": "36b50292-cbe3-4049-8f8c-ef10929c40ef", 00:14:50.799 "assigned_rate_limits": { 00:14:50.799 "rw_ios_per_sec": 0, 00:14:50.799 "rw_mbytes_per_sec": 0, 00:14:50.799 "r_mbytes_per_sec": 0, 00:14:50.799 "w_mbytes_per_sec": 0 00:14:50.799 }, 00:14:50.799 "claimed": false, 00:14:50.799 "zoned": false, 00:14:50.799 "supported_io_types": { 00:14:50.799 "read": true, 00:14:50.799 "write": true, 00:14:50.799 "unmap": true, 00:14:50.799 "write_zeroes": true, 00:14:50.799 "flush": true, 00:14:50.799 "reset": true, 00:14:50.799 "compare": true, 00:14:50.799 "compare_and_write": true, 00:14:50.799 "abort": true, 00:14:50.799 "nvme_admin": true, 00:14:50.799 "nvme_io": true 00:14:50.799 }, 00:14:50.799 "memory_domains": [ 00:14:50.799 { 00:14:50.799 "dma_device_id": "system", 00:14:50.799 "dma_device_type": 1 00:14:50.799 } 00:14:50.799 ], 00:14:50.799 "driver_specific": { 00:14:50.799 "nvme": [ 00:14:50.799 { 00:14:50.799 "trid": { 00:14:50.799 "trtype": "TCP", 00:14:50.799 "adrfam": "IPv4", 00:14:50.799 "traddr": "10.0.0.2", 00:14:50.799 "trsvcid": "4420", 00:14:50.799 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:50.799 }, 00:14:50.799 "ctrlr_data": { 00:14:50.799 "cntlid": 1, 00:14:50.799 "vendor_id": "0x8086", 00:14:50.799 "model_number": "SPDK bdev Controller", 00:14:50.799 "serial_number": "SPDK0", 
00:14:50.799 "firmware_revision": "24.05", 00:14:50.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:50.799 "oacs": { 00:14:50.799 "security": 0, 00:14:50.799 "format": 0, 00:14:50.799 "firmware": 0, 00:14:50.799 "ns_manage": 0 00:14:50.799 }, 00:14:50.799 "multi_ctrlr": true, 00:14:50.799 "ana_reporting": false 00:14:50.799 }, 00:14:50.799 "vs": { 00:14:50.799 "nvme_version": "1.3" 00:14:50.799 }, 00:14:50.799 "ns_data": { 00:14:50.799 "id": 1, 00:14:50.799 "can_share": true 00:14:50.799 } 00:14:50.799 } 00:14:50.799 ], 00:14:50.799 "mp_policy": "active_passive" 00:14:50.799 } 00:14:50.799 } 00:14:50.799 ] 00:14:50.799 14:51:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1019695 00:14:50.799 14:51:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:50.799 14:51:33 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:50.799 Running I/O for 10 seconds... 00:14:51.737 Latency(us) 00:14:51.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.737 Nvme0n1 : 1.00 17538.00 68.51 0.00 0.00 0.00 0.00 0.00 00:14:51.737 =================================================================================================================== 00:14:51.737 Total : 17538.00 68.51 0.00 0.00 0.00 0.00 0.00 00:14:51.737 00:14:52.675 14:51:35 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:14:52.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.675 Nvme0n1 : 2.00 17662.50 68.99 0.00 0.00 0.00 0.00 0.00 00:14:52.675 =================================================================================================================== 00:14:52.675 Total : 17662.50 68.99 0.00 0.00 0.00 0.00 0.00 00:14:52.675 00:14:52.935 true 00:14:52.935 14:51:35 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:14:52.935 14:51:35 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:52.935 14:51:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:52.935 14:51:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:52.935 14:51:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 1019695 00:14:53.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.874 Nvme0n1 : 3.00 17701.67 69.15 0.00 0.00 0.00 0.00 0.00 00:14:53.874 =================================================================================================================== 00:14:53.874 Total : 17701.67 69.15 0.00 0.00 0.00 0.00 0.00 00:14:53.874 00:14:54.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.814 Nvme0n1 : 4.00 17720.50 69.22 0.00 0.00 0.00 0.00 0.00 00:14:54.814 =================================================================================================================== 00:14:54.814 Total : 17720.50 69.22 0.00 0.00 0.00 0.00 0.00 00:14:54.814 00:14:55.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.753 Nvme0n1 : 5.00 17755.40 69.36 0.00 0.00 0.00 0.00 0.00 00:14:55.753 =================================================================================================================== 00:14:55.753 Total : 
17755.40 69.36 0.00 0.00 0.00 0.00 0.00 00:14:55.753 00:14:56.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.691 Nvme0n1 : 6.00 17778.67 69.45 0.00 0.00 0.00 0.00 0.00 00:14:56.691 =================================================================================================================== 00:14:56.691 Total : 17778.67 69.45 0.00 0.00 0.00 0.00 0.00 00:14:56.691 00:14:58.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.073 Nvme0n1 : 7.00 17786.86 69.48 0.00 0.00 0.00 0.00 0.00 00:14:58.073 =================================================================================================================== 00:14:58.073 Total : 17786.86 69.48 0.00 0.00 0.00 0.00 0.00 00:14:58.073 00:14:59.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.012 Nvme0n1 : 8.00 17793.25 69.50 0.00 0.00 0.00 0.00 0.00 00:14:59.012 =================================================================================================================== 00:14:59.012 Total : 17793.25 69.50 0.00 0.00 0.00 0.00 0.00 00:14:59.012 00:14:59.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.952 Nvme0n1 : 9.00 17803.56 69.55 0.00 0.00 0.00 0.00 0.00 00:14:59.952 =================================================================================================================== 00:14:59.952 Total : 17803.56 69.55 0.00 0.00 0.00 0.00 0.00 00:14:59.952 00:15:00.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.893 Nvme0n1 : 10.00 17818.20 69.60 0.00 0.00 0.00 0.00 0.00 00:15:00.893 =================================================================================================================== 00:15:00.893 Total : 17818.20 69.60 0.00 0.00 0.00 0.00 0.00 00:15:00.893 00:15:00.893 00:15:00.893 Latency(us) 00:15:00.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.893 Nvme0n1 : 10.00 17816.99 69.60 0.00 0.00 7180.73 4205.23 13489.49 00:15:00.893 =================================================================================================================== 00:15:00.893 Total : 17816.99 69.60 0.00 0.00 7180.73 4205.23 13489.49 00:15:00.893 0 00:15:00.893 14:51:43 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1019385 00:15:00.893 14:51:43 -- common/autotest_common.sh@936 -- # '[' -z 1019385 ']' 00:15:00.893 14:51:43 -- common/autotest_common.sh@940 -- # kill -0 1019385 00:15:00.893 14:51:43 -- common/autotest_common.sh@941 -- # uname 00:15:00.893 14:51:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:00.893 14:51:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1019385 00:15:00.893 14:51:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:00.893 14:51:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:00.893 14:51:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1019385' 00:15:00.893 killing process with pid 1019385 00:15:00.893 14:51:43 -- common/autotest_common.sh@955 -- # kill 1019385 00:15:00.893 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.893 00:15:00.893 Latency(us) 00:15:00.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.893 =================================================================================================================== 
00:15:00.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.893 14:51:43 -- common/autotest_common.sh@960 -- # wait 1019385 00:15:00.893 14:51:43 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:01.152 14:51:43 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:15:01.152 14:51:43 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:01.413 14:51:43 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:01.413 14:51:43 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:01.413 14:51:43 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:01.413 [2024-04-26 14:51:44.042363] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:01.413 14:51:44 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:15:01.413 14:51:44 -- common/autotest_common.sh@638 -- # local es=0 00:15:01.413 14:51:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:15:01.413 14:51:44 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.413 14:51:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:01.413 14:51:44 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.413 14:51:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:01.413 14:51:44 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.413 14:51:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:01.413 14:51:44 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.413 14:51:44 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:01.413 14:51:44 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:15:01.672 request: 00:15:01.672 { 00:15:01.672 "uuid": "fea5caf7-eeda-4601-8895-2ee2b9d2c863", 00:15:01.672 "method": "bdev_lvol_get_lvstores", 00:15:01.672 "req_id": 1 00:15:01.672 } 00:15:01.672 Got JSON-RPC error response 00:15:01.672 response: 00:15:01.672 { 00:15:01.672 "code": -19, 00:15:01.672 "message": "No such device" 00:15:01.672 } 00:15:01.672 14:51:44 -- common/autotest_common.sh@641 -- # es=1 00:15:01.672 14:51:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:01.672 14:51:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:01.672 14:51:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:01.672 14:51:44 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:01.932 aio_bdev 00:15:01.932 14:51:44 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
36b50292-cbe3-4049-8f8c-ef10929c40ef 00:15:01.932 14:51:44 -- common/autotest_common.sh@885 -- # local bdev_name=36b50292-cbe3-4049-8f8c-ef10929c40ef 00:15:01.932 14:51:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:01.932 14:51:44 -- common/autotest_common.sh@887 -- # local i 00:15:01.932 14:51:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:01.932 14:51:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:01.932 14:51:44 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:01.932 14:51:44 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 36b50292-cbe3-4049-8f8c-ef10929c40ef -t 2000 00:15:02.192 [ 00:15:02.192 { 00:15:02.192 "name": "36b50292-cbe3-4049-8f8c-ef10929c40ef", 00:15:02.192 "aliases": [ 00:15:02.192 "lvs/lvol" 00:15:02.192 ], 00:15:02.192 "product_name": "Logical Volume", 00:15:02.192 "block_size": 4096, 00:15:02.192 "num_blocks": 38912, 00:15:02.192 "uuid": "36b50292-cbe3-4049-8f8c-ef10929c40ef", 00:15:02.192 "assigned_rate_limits": { 00:15:02.192 "rw_ios_per_sec": 0, 00:15:02.192 "rw_mbytes_per_sec": 0, 00:15:02.192 "r_mbytes_per_sec": 0, 00:15:02.192 "w_mbytes_per_sec": 0 00:15:02.192 }, 00:15:02.192 "claimed": false, 00:15:02.192 "zoned": false, 00:15:02.192 "supported_io_types": { 00:15:02.192 "read": true, 00:15:02.192 "write": true, 00:15:02.192 "unmap": true, 00:15:02.192 "write_zeroes": true, 00:15:02.192 "flush": false, 00:15:02.192 "reset": true, 00:15:02.192 "compare": false, 00:15:02.192 "compare_and_write": false, 00:15:02.192 "abort": false, 00:15:02.192 "nvme_admin": false, 00:15:02.192 "nvme_io": false 00:15:02.192 }, 00:15:02.192 "driver_specific": { 00:15:02.192 "lvol": { 00:15:02.192 "lvol_store_uuid": "fea5caf7-eeda-4601-8895-2ee2b9d2c863", 00:15:02.192 "base_bdev": "aio_bdev", 00:15:02.192 "thin_provision": false, 00:15:02.192 "snapshot": false, 00:15:02.192 "clone": false, 00:15:02.192 "esnap_clone": false 00:15:02.192 } 00:15:02.192 } 00:15:02.192 } 00:15:02.192 ] 00:15:02.192 14:51:44 -- common/autotest_common.sh@893 -- # return 0 00:15:02.192 14:51:44 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:15:02.192 14:51:44 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:02.192 14:51:44 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:02.192 14:51:44 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:15:02.192 14:51:44 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:02.451 14:51:44 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:02.451 14:51:44 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 36b50292-cbe3-4049-8f8c-ef10929c40ef 00:15:02.711 14:51:45 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fea5caf7-eeda-4601-8895-2ee2b9d2c863 00:15:02.711 14:51:45 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:02.970 14:51:45 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
00:15:02.970 00:15:02.970 real 0m15.219s 00:15:02.970 user 0m14.966s 00:15:02.970 sys 0m1.242s 00:15:02.970 14:51:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:02.970 14:51:45 -- common/autotest_common.sh@10 -- # set +x 00:15:02.970 ************************************ 00:15:02.970 END TEST lvs_grow_clean 00:15:02.970 ************************************ 00:15:02.970 14:51:45 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:02.970 14:51:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:02.970 14:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.970 14:51:45 -- common/autotest_common.sh@10 -- # set +x 00:15:03.229 ************************************ 00:15:03.230 START TEST lvs_grow_dirty 00:15:03.230 ************************************ 00:15:03.230 14:51:45 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:15:03.230 14:51:45 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:03.230 14:51:45 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:03.230 14:51:45 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:03.230 14:51:45 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:03.230 14:51:45 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:03.230 14:51:45 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:03.230 14:51:45 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:03.230 14:51:45 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:03.230 14:51:45 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:03.489 14:51:45 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:03.489 14:51:45 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:03.489 14:51:46 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:03.489 14:51:46 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:03.489 14:51:46 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:03.748 14:51:46 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:03.748 14:51:46 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:03.748 14:51:46 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 lvol 150 00:15:03.748 14:51:46 -- target/nvmf_lvs_grow.sh@33 -- # lvol=443e3b66-bcab-4500-a2c4-f63ca1407332 00:15:03.748 14:51:46 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:03.748 14:51:46 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:04.008 [2024-04-26 14:51:46.530277] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:15:04.008 [2024-04-26 14:51:46.530327] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:04.008 true 00:15:04.008 14:51:46 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:04.008 14:51:46 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:04.267 14:51:46 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:04.267 14:51:46 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:04.267 14:51:46 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 443e3b66-bcab-4500-a2c4-f63ca1407332 00:15:04.527 14:51:46 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:04.527 14:51:47 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:04.786 14:51:47 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1022478 00:15:04.786 14:51:47 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:04.786 14:51:47 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1022478 /var/tmp/bdevperf.sock 00:15:04.786 14:51:47 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:04.786 14:51:47 -- common/autotest_common.sh@817 -- # '[' -z 1022478 ']' 00:15:04.786 14:51:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.786 14:51:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:04.786 14:51:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:04.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:04.786 14:51:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:04.786 14:51:47 -- common/autotest_common.sh@10 -- # set +x 00:15:04.786 [2024-04-26 14:51:47.362982] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
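Note: the lvs_grow cases (the clean run that just finished and the dirty run starting here) verify that a logical volume store grows after its backing AIO file is enlarged. Stripped to the essentials, with <lvs-uuid> standing in for the per-run UUID and the aio file path shortened:

truncate -s 200M test/nvmf/target/aio_bdev                           # 200 MiB backing file
rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
       --md-pages-per-cluster-ratio 300 aio_bdev lvs                 # -> <lvs-uuid>, 49 data clusters
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150                       # 150 MiB lvol, exported over NVMe/TCP
truncate -s 400M test/nvmf/target/aio_bdev                           # enlarge the backing file
rpc.py bdev_aio_rescan aio_bdev                                      # aio bdev picks up the new block count
rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>                          # lvstore now reports 99 data clusters
rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'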
00:15:04.787 [2024-04-26 14:51:47.363032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022478 ] 00:15:04.787 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.787 [2024-04-26 14:51:47.439148] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.046 [2024-04-26 14:51:47.501229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.617 14:51:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.617 14:51:48 -- common/autotest_common.sh@850 -- # return 0 00:15:05.617 14:51:48 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:05.878 Nvme0n1 00:15:05.878 14:51:48 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:06.139 [ 00:15:06.139 { 00:15:06.139 "name": "Nvme0n1", 00:15:06.139 "aliases": [ 00:15:06.139 "443e3b66-bcab-4500-a2c4-f63ca1407332" 00:15:06.139 ], 00:15:06.139 "product_name": "NVMe disk", 00:15:06.139 "block_size": 4096, 00:15:06.139 "num_blocks": 38912, 00:15:06.139 "uuid": "443e3b66-bcab-4500-a2c4-f63ca1407332", 00:15:06.139 "assigned_rate_limits": { 00:15:06.139 "rw_ios_per_sec": 0, 00:15:06.139 "rw_mbytes_per_sec": 0, 00:15:06.139 "r_mbytes_per_sec": 0, 00:15:06.139 "w_mbytes_per_sec": 0 00:15:06.139 }, 00:15:06.139 "claimed": false, 00:15:06.139 "zoned": false, 00:15:06.139 "supported_io_types": { 00:15:06.139 "read": true, 00:15:06.139 "write": true, 00:15:06.139 "unmap": true, 00:15:06.139 "write_zeroes": true, 00:15:06.139 "flush": true, 00:15:06.139 "reset": true, 00:15:06.139 "compare": true, 00:15:06.139 "compare_and_write": true, 00:15:06.139 "abort": true, 00:15:06.139 "nvme_admin": true, 00:15:06.139 "nvme_io": true 00:15:06.139 }, 00:15:06.139 "memory_domains": [ 00:15:06.139 { 00:15:06.139 "dma_device_id": "system", 00:15:06.139 "dma_device_type": 1 00:15:06.139 } 00:15:06.139 ], 00:15:06.139 "driver_specific": { 00:15:06.139 "nvme": [ 00:15:06.139 { 00:15:06.139 "trid": { 00:15:06.139 "trtype": "TCP", 00:15:06.139 "adrfam": "IPv4", 00:15:06.139 "traddr": "10.0.0.2", 00:15:06.139 "trsvcid": "4420", 00:15:06.139 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:06.139 }, 00:15:06.139 "ctrlr_data": { 00:15:06.139 "cntlid": 1, 00:15:06.139 "vendor_id": "0x8086", 00:15:06.139 "model_number": "SPDK bdev Controller", 00:15:06.139 "serial_number": "SPDK0", 00:15:06.139 "firmware_revision": "24.05", 00:15:06.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:06.139 "oacs": { 00:15:06.139 "security": 0, 00:15:06.139 "format": 0, 00:15:06.139 "firmware": 0, 00:15:06.139 "ns_manage": 0 00:15:06.139 }, 00:15:06.139 "multi_ctrlr": true, 00:15:06.139 "ana_reporting": false 00:15:06.139 }, 00:15:06.139 "vs": { 00:15:06.139 "nvme_version": "1.3" 00:15:06.139 }, 00:15:06.139 "ns_data": { 00:15:06.139 "id": 1, 00:15:06.139 "can_share": true 00:15:06.139 } 00:15:06.139 } 00:15:06.139 ], 00:15:06.139 "mp_policy": "active_passive" 00:15:06.139 } 00:15:06.139 } 00:15:06.139 ] 00:15:06.139 14:51:48 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:06.139 
14:51:48 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1022718 00:15:06.139 14:51:48 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:06.139 Running I/O for 10 seconds... 00:15:07.079 Latency(us) 00:15:07.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.079 Nvme0n1 : 1.00 17537.00 68.50 0.00 0.00 0.00 0.00 0.00 00:15:07.079 =================================================================================================================== 00:15:07.079 Total : 17537.00 68.50 0.00 0.00 0.00 0.00 0.00 00:15:07.079 00:15:08.016 14:51:50 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:08.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.276 Nvme0n1 : 2.00 17649.50 68.94 0.00 0.00 0.00 0.00 0.00 00:15:08.276 =================================================================================================================== 00:15:08.276 Total : 17649.50 68.94 0.00 0.00 0.00 0.00 0.00 00:15:08.276 00:15:08.276 true 00:15:08.276 14:51:50 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:08.276 14:51:50 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:08.535 14:51:50 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:08.535 14:51:50 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:08.535 14:51:50 -- target/nvmf_lvs_grow.sh@65 -- # wait 1022718 00:15:09.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.105 Nvme0n1 : 3.00 17694.00 69.12 0.00 0.00 0.00 0.00 0.00 00:15:09.105 =================================================================================================================== 00:15:09.105 Total : 17694.00 69.12 0.00 0.00 0.00 0.00 0.00 00:15:09.105 00:15:10.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.492 Nvme0n1 : 4.00 17718.50 69.21 0.00 0.00 0.00 0.00 0.00 00:15:10.492 =================================================================================================================== 00:15:10.492 Total : 17718.50 69.21 0.00 0.00 0.00 0.00 0.00 00:15:10.492 00:15:11.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.112 Nvme0n1 : 5.00 17745.20 69.32 0.00 0.00 0.00 0.00 0.00 00:15:11.112 =================================================================================================================== 00:15:11.112 Total : 17745.20 69.32 0.00 0.00 0.00 0.00 0.00 00:15:11.112 00:15:12.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.496 Nvme0n1 : 6.00 17752.00 69.34 0.00 0.00 0.00 0.00 0.00 00:15:12.496 =================================================================================================================== 00:15:12.496 Total : 17752.00 69.34 0.00 0.00 0.00 0.00 0.00 00:15:12.496 00:15:13.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.438 Nvme0n1 : 7.00 17773.71 69.43 0.00 0.00 0.00 0.00 0.00 00:15:13.438 =================================================================================================================== 00:15:13.438 Total : 17773.71 69.43 0.00 0.00 0.00 0.00 0.00 00:15:13.438 00:15:14.380 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.380 Nvme0n1 : 8.00 17791.62 69.50 0.00 0.00 0.00 0.00 0.00 00:15:14.380 =================================================================================================================== 00:15:14.380 Total : 17791.62 69.50 0.00 0.00 0.00 0.00 0.00 00:15:14.380 00:15:15.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.322 Nvme0n1 : 9.00 17797.67 69.52 0.00 0.00 0.00 0.00 0.00 00:15:15.322 =================================================================================================================== 00:15:15.322 Total : 17797.67 69.52 0.00 0.00 0.00 0.00 0.00 00:15:15.322 00:15:16.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.263 Nvme0n1 : 10.00 17808.60 69.56 0.00 0.00 0.00 0.00 0.00 00:15:16.263 =================================================================================================================== 00:15:16.263 Total : 17808.60 69.56 0.00 0.00 0.00 0.00 0.00 00:15:16.263 00:15:16.263 00:15:16.263 Latency(us) 00:15:16.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.263 Nvme0n1 : 10.01 17809.42 69.57 0.00 0.00 7183.70 4396.37 14527.15 00:15:16.263 =================================================================================================================== 00:15:16.263 Total : 17809.42 69.57 0.00 0.00 7183.70 4396.37 14527.15 00:15:16.263 0 00:15:16.263 14:51:58 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1022478 00:15:16.263 14:51:58 -- common/autotest_common.sh@936 -- # '[' -z 1022478 ']' 00:15:16.263 14:51:58 -- common/autotest_common.sh@940 -- # kill -0 1022478 00:15:16.263 14:51:58 -- common/autotest_common.sh@941 -- # uname 00:15:16.263 14:51:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.263 14:51:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1022478 00:15:16.263 14:51:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:16.263 14:51:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:16.263 14:51:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1022478' 00:15:16.263 killing process with pid 1022478 00:15:16.263 14:51:58 -- common/autotest_common.sh@955 -- # kill 1022478 00:15:16.263 Received shutdown signal, test time was about 10.000000 seconds 00:15:16.263 00:15:16.263 Latency(us) 00:15:16.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.263 =================================================================================================================== 00:15:16.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:16.263 14:51:58 -- common/autotest_common.sh@960 -- # wait 1022478 00:15:16.523 14:51:58 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:16.523 14:51:59 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:16.523 14:51:59 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:16.784 14:51:59 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:16.784 14:51:59 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:16.784 14:51:59 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1018680 00:15:16.784 
14:51:59 -- target/nvmf_lvs_grow.sh@74 -- # wait 1018680 00:15:16.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1018680 Killed "${NVMF_APP[@]}" "$@" 00:15:16.784 14:51:59 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:16.784 14:51:59 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:16.784 14:51:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:16.784 14:51:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:16.784 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:15:16.784 14:51:59 -- nvmf/common.sh@470 -- # nvmfpid=1024834 00:15:16.784 14:51:59 -- nvmf/common.sh@471 -- # waitforlisten 1024834 00:15:16.784 14:51:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:16.784 14:51:59 -- common/autotest_common.sh@817 -- # '[' -z 1024834 ']' 00:15:16.784 14:51:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.784 14:51:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:16.784 14:51:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.784 14:51:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:16.784 14:51:59 -- common/autotest_common.sh@10 -- # set +x 00:15:16.784 [2024-04-26 14:51:59.384164] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:16.784 [2024-04-26 14:51:59.384220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.784 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.045 [2024-04-26 14:51:59.450859] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.045 [2024-04-26 14:51:59.515005] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.045 [2024-04-26 14:51:59.515043] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.045 [2024-04-26 14:51:59.515051] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.045 [2024-04-26 14:51:59.515057] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.045 [2024-04-26 14:51:59.515062] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
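The restart captured here, where the old target (pid 1018680) was killed with -9 so the lvstore is left dirty and a fresh nvmf_tgt is started inside the namespace, can be summarised as below. The polling loop only stands in for the waitforlisten helper, whose internals are not part of this excerpt, so treat that part as an assumption.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
kill -9 1018680                                          # pid from this run; leaves the lvstore dirty on purpose
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# stand-in for waitforlisten: poll until the RPC socket answers before issuing more rpc.py calls
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done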
00:15:17.045 [2024-04-26 14:51:59.515080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.617 14:52:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.617 14:52:00 -- common/autotest_common.sh@850 -- # return 0 00:15:17.617 14:52:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:17.617 14:52:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:17.617 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:15:17.617 14:52:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.617 14:52:00 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:17.878 [2024-04-26 14:52:00.327904] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:17.878 [2024-04-26 14:52:00.327992] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:17.878 [2024-04-26 14:52:00.328020] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:17.878 14:52:00 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:17.878 14:52:00 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 443e3b66-bcab-4500-a2c4-f63ca1407332 00:15:17.878 14:52:00 -- common/autotest_common.sh@885 -- # local bdev_name=443e3b66-bcab-4500-a2c4-f63ca1407332 00:15:17.878 14:52:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:17.879 14:52:00 -- common/autotest_common.sh@887 -- # local i 00:15:17.879 14:52:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:17.879 14:52:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:17.879 14:52:00 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:17.879 14:52:00 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 443e3b66-bcab-4500-a2c4-f63ca1407332 -t 2000 00:15:18.139 [ 00:15:18.139 { 00:15:18.139 "name": "443e3b66-bcab-4500-a2c4-f63ca1407332", 00:15:18.139 "aliases": [ 00:15:18.139 "lvs/lvol" 00:15:18.139 ], 00:15:18.139 "product_name": "Logical Volume", 00:15:18.139 "block_size": 4096, 00:15:18.139 "num_blocks": 38912, 00:15:18.139 "uuid": "443e3b66-bcab-4500-a2c4-f63ca1407332", 00:15:18.139 "assigned_rate_limits": { 00:15:18.139 "rw_ios_per_sec": 0, 00:15:18.139 "rw_mbytes_per_sec": 0, 00:15:18.139 "r_mbytes_per_sec": 0, 00:15:18.139 "w_mbytes_per_sec": 0 00:15:18.139 }, 00:15:18.139 "claimed": false, 00:15:18.139 "zoned": false, 00:15:18.139 "supported_io_types": { 00:15:18.139 "read": true, 00:15:18.139 "write": true, 00:15:18.139 "unmap": true, 00:15:18.139 "write_zeroes": true, 00:15:18.139 "flush": false, 00:15:18.139 "reset": true, 00:15:18.139 "compare": false, 00:15:18.139 "compare_and_write": false, 00:15:18.139 "abort": false, 00:15:18.139 "nvme_admin": false, 00:15:18.139 "nvme_io": false 00:15:18.139 }, 00:15:18.139 "driver_specific": { 00:15:18.139 "lvol": { 00:15:18.139 "lvol_store_uuid": "b978ae8d-e164-4d66-b181-9e3bc73f5c16", 00:15:18.139 "base_bdev": "aio_bdev", 00:15:18.139 "thin_provision": false, 00:15:18.139 "snapshot": false, 00:15:18.139 "clone": false, 00:15:18.139 "esnap_clone": false 00:15:18.139 } 00:15:18.139 } 00:15:18.139 } 00:15:18.139 ] 00:15:18.139 14:52:00 -- common/autotest_common.sh@893 -- # return 0 00:15:18.139 14:52:00 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:18.139 14:52:00 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:18.400 14:52:00 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:18.400 14:52:00 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:18.400 14:52:00 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:18.400 14:52:00 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:18.400 14:52:00 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:18.663 [2024-04-26 14:52:01.115936] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:18.663 14:52:01 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:18.663 14:52:01 -- common/autotest_common.sh@638 -- # local es=0 00:15:18.663 14:52:01 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:18.663 14:52:01 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.663 14:52:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:18.663 14:52:01 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.663 14:52:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:18.663 14:52:01 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.663 14:52:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:18.663 14:52:01 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.663 14:52:01 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:18.663 14:52:01 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:18.663 request: 00:15:18.663 { 00:15:18.663 "uuid": "b978ae8d-e164-4d66-b181-9e3bc73f5c16", 00:15:18.663 "method": "bdev_lvol_get_lvstores", 00:15:18.663 "req_id": 1 00:15:18.663 } 00:15:18.663 Got JSON-RPC error response 00:15:18.663 response: 00:15:18.663 { 00:15:18.663 "code": -19, 00:15:18.663 "message": "No such device" 00:15:18.663 } 00:15:18.663 14:52:01 -- common/autotest_common.sh@641 -- # es=1 00:15:18.663 14:52:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:18.663 14:52:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:18.663 14:52:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:18.663 14:52:01 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:18.922 aio_bdev 00:15:18.922 14:52:01 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 443e3b66-bcab-4500-a2c4-f63ca1407332 00:15:18.922 14:52:01 -- 
common/autotest_common.sh@885 -- # local bdev_name=443e3b66-bcab-4500-a2c4-f63ca1407332 00:15:18.922 14:52:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:18.922 14:52:01 -- common/autotest_common.sh@887 -- # local i 00:15:18.922 14:52:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:18.922 14:52:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:18.922 14:52:01 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:19.182 14:52:01 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 443e3b66-bcab-4500-a2c4-f63ca1407332 -t 2000 00:15:19.182 [ 00:15:19.182 { 00:15:19.182 "name": "443e3b66-bcab-4500-a2c4-f63ca1407332", 00:15:19.182 "aliases": [ 00:15:19.182 "lvs/lvol" 00:15:19.182 ], 00:15:19.182 "product_name": "Logical Volume", 00:15:19.182 "block_size": 4096, 00:15:19.182 "num_blocks": 38912, 00:15:19.182 "uuid": "443e3b66-bcab-4500-a2c4-f63ca1407332", 00:15:19.182 "assigned_rate_limits": { 00:15:19.182 "rw_ios_per_sec": 0, 00:15:19.182 "rw_mbytes_per_sec": 0, 00:15:19.182 "r_mbytes_per_sec": 0, 00:15:19.182 "w_mbytes_per_sec": 0 00:15:19.182 }, 00:15:19.182 "claimed": false, 00:15:19.182 "zoned": false, 00:15:19.182 "supported_io_types": { 00:15:19.182 "read": true, 00:15:19.182 "write": true, 00:15:19.182 "unmap": true, 00:15:19.182 "write_zeroes": true, 00:15:19.182 "flush": false, 00:15:19.182 "reset": true, 00:15:19.182 "compare": false, 00:15:19.182 "compare_and_write": false, 00:15:19.182 "abort": false, 00:15:19.182 "nvme_admin": false, 00:15:19.182 "nvme_io": false 00:15:19.182 }, 00:15:19.182 "driver_specific": { 00:15:19.182 "lvol": { 00:15:19.182 "lvol_store_uuid": "b978ae8d-e164-4d66-b181-9e3bc73f5c16", 00:15:19.182 "base_bdev": "aio_bdev", 00:15:19.182 "thin_provision": false, 00:15:19.182 "snapshot": false, 00:15:19.182 "clone": false, 00:15:19.182 "esnap_clone": false 00:15:19.182 } 00:15:19.182 } 00:15:19.182 } 00:15:19.182 ] 00:15:19.182 14:52:01 -- common/autotest_common.sh@893 -- # return 0 00:15:19.182 14:52:01 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:19.182 14:52:01 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:19.442 14:52:01 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:19.442 14:52:01 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:19.442 14:52:01 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:19.442 14:52:02 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:19.442 14:52:02 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 443e3b66-bcab-4500-a2c4-f63ca1407332 00:15:19.702 14:52:02 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b978ae8d-e164-4d66-b181-9e3bc73f5c16 00:15:19.961 14:52:02 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:19.961 14:52:02 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:19.961 00:15:19.961 real 0m16.873s 00:15:19.961 user 
0m44.448s 00:15:19.961 sys 0m2.740s 00:15:19.961 14:52:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.961 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.961 ************************************ 00:15:19.961 END TEST lvs_grow_dirty 00:15:19.961 ************************************ 00:15:20.221 14:52:02 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:20.221 14:52:02 -- common/autotest_common.sh@794 -- # type=--id 00:15:20.221 14:52:02 -- common/autotest_common.sh@795 -- # id=0 00:15:20.221 14:52:02 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:20.221 14:52:02 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:20.221 14:52:02 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:20.221 14:52:02 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:20.221 14:52:02 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:20.221 14:52:02 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:20.221 nvmf_trace.0 00:15:20.221 14:52:02 -- common/autotest_common.sh@809 -- # return 0 00:15:20.221 14:52:02 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:20.221 14:52:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:20.221 14:52:02 -- nvmf/common.sh@117 -- # sync 00:15:20.221 14:52:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.221 14:52:02 -- nvmf/common.sh@120 -- # set +e 00:15:20.221 14:52:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.221 14:52:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.221 rmmod nvme_tcp 00:15:20.221 rmmod nvme_fabrics 00:15:20.221 rmmod nvme_keyring 00:15:20.221 14:52:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.221 14:52:02 -- nvmf/common.sh@124 -- # set -e 00:15:20.221 14:52:02 -- nvmf/common.sh@125 -- # return 0 00:15:20.221 14:52:02 -- nvmf/common.sh@478 -- # '[' -n 1024834 ']' 00:15:20.221 14:52:02 -- nvmf/common.sh@479 -- # killprocess 1024834 00:15:20.221 14:52:02 -- common/autotest_common.sh@936 -- # '[' -z 1024834 ']' 00:15:20.221 14:52:02 -- common/autotest_common.sh@940 -- # kill -0 1024834 00:15:20.221 14:52:02 -- common/autotest_common.sh@941 -- # uname 00:15:20.221 14:52:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:20.221 14:52:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1024834 00:15:20.221 14:52:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:20.221 14:52:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:20.221 14:52:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1024834' 00:15:20.221 killing process with pid 1024834 00:15:20.221 14:52:02 -- common/autotest_common.sh@955 -- # kill 1024834 00:15:20.221 14:52:02 -- common/autotest_common.sh@960 -- # wait 1024834 00:15:20.481 14:52:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:20.481 14:52:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:20.481 14:52:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:20.481 14:52:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.481 14:52:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.481 14:52:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.481 14:52:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.482 14:52:02 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:22.393 14:52:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:22.393 00:15:22.393 real 0m43.321s 00:15:22.393 user 1m5.371s 00:15:22.393 sys 0m9.993s 00:15:22.393 14:52:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:22.393 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:15:22.393 ************************************ 00:15:22.393 END TEST nvmf_lvs_grow 00:15:22.393 ************************************ 00:15:22.393 14:52:05 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:22.393 14:52:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:22.393 14:52:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:22.393 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:15:22.653 ************************************ 00:15:22.653 START TEST nvmf_bdev_io_wait 00:15:22.653 ************************************ 00:15:22.653 14:52:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:22.653 * Looking for test storage... 00:15:22.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.653 14:52:05 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.653 14:52:05 -- nvmf/common.sh@7 -- # uname -s 00:15:22.653 14:52:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.653 14:52:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.653 14:52:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.653 14:52:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.653 14:52:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.653 14:52:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.653 14:52:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.653 14:52:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.653 14:52:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.653 14:52:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.914 14:52:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:22.914 14:52:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:22.914 14:52:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.914 14:52:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.914 14:52:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.914 14:52:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.914 14:52:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.914 14:52:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.914 14:52:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.914 14:52:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.914 14:52:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.914 14:52:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.914 14:52:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.914 14:52:05 -- paths/export.sh@5 -- # export PATH 00:15:22.914 14:52:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.914 14:52:05 -- nvmf/common.sh@47 -- # : 0 00:15:22.914 14:52:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.914 14:52:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.914 14:52:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.914 14:52:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.914 14:52:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.914 14:52:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.914 14:52:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.914 14:52:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.914 14:52:05 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:22.914 14:52:05 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:22.914 14:52:05 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:22.914 14:52:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:22.914 14:52:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.914 14:52:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:22.914 14:52:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:22.914 14:52:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:22.914 14:52:05 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.914 14:52:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.914 14:52:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.914 14:52:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:22.914 14:52:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:22.914 14:52:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:22.914 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:15:31.052 14:52:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:31.052 14:52:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:31.052 14:52:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:31.052 14:52:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:31.052 14:52:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:31.052 14:52:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:31.052 14:52:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:31.052 14:52:12 -- nvmf/common.sh@295 -- # net_devs=() 00:15:31.052 14:52:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:31.052 14:52:12 -- nvmf/common.sh@296 -- # e810=() 00:15:31.052 14:52:12 -- nvmf/common.sh@296 -- # local -ga e810 00:15:31.052 14:52:12 -- nvmf/common.sh@297 -- # x722=() 00:15:31.052 14:52:12 -- nvmf/common.sh@297 -- # local -ga x722 00:15:31.053 14:52:12 -- nvmf/common.sh@298 -- # mlx=() 00:15:31.053 14:52:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:31.053 14:52:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:31.053 14:52:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:31.053 14:52:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:31.053 14:52:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:31.053 14:52:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:31.053 14:52:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:31.053 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:31.053 14:52:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:15:31.053 14:52:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:31.053 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:31.053 14:52:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:31.053 14:52:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:31.053 14:52:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.053 14:52:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:31.053 14:52:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.053 14:52:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:31.053 Found net devices under 0000:31:00.0: cvl_0_0 00:15:31.053 14:52:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.053 14:52:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:31.053 14:52:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.053 14:52:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:31.053 14:52:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.053 14:52:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:31.053 Found net devices under 0000:31:00.1: cvl_0_1 00:15:31.053 14:52:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.053 14:52:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:31.053 14:52:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:31.053 14:52:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:31.053 14:52:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.053 14:52:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.053 14:52:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:31.053 14:52:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:31.053 14:52:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:31.053 14:52:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:31.053 14:52:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:31.053 14:52:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:31.053 14:52:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.053 14:52:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:31.053 14:52:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:31.053 14:52:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:31.053 14:52:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:31.053 14:52:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:31.053 14:52:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:31.053 14:52:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:31.053 14:52:12 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:31.053 14:52:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:31.053 14:52:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:31.053 14:52:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:31.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:15:31.053 00:15:31.053 --- 10.0.0.2 ping statistics --- 00:15:31.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.053 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:15:31.053 14:52:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:31.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:15:31.053 00:15:31.053 --- 10.0.0.1 ping statistics --- 00:15:31.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.053 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:31.053 14:52:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.053 14:52:12 -- nvmf/common.sh@411 -- # return 0 00:15:31.053 14:52:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:31.053 14:52:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.053 14:52:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:31.053 14:52:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.053 14:52:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:31.053 14:52:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:31.053 14:52:12 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:31.053 14:52:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:31.053 14:52:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:31.053 14:52:12 -- common/autotest_common.sh@10 -- # set +x 00:15:31.053 14:52:12 -- nvmf/common.sh@470 -- # nvmfpid=1029715 00:15:31.053 14:52:12 -- nvmf/common.sh@471 -- # waitforlisten 1029715 00:15:31.053 14:52:12 -- common/autotest_common.sh@817 -- # '[' -z 1029715 ']' 00:15:31.053 14:52:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.053 14:52:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:31.053 14:52:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.053 14:52:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:31.053 14:52:12 -- common/autotest_common.sh@10 -- # set +x 00:15:31.053 14:52:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:31.053 [2024-04-26 14:52:12.846895] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
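Condensed, the TCP bring-up traced above amounts to the commands below; interface names and addresses are copied from the log, and the listing is only an illustration of what the nvmf_tcp_init helper did in this run, not a verbatim excerpt of it.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                       # sanity check: ~0.38 ms round trip in this run
modprobe nvme-tcp                                        # host-side transport for the connect path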
00:15:31.053 [2024-04-26 14:52:12.846961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.053 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.053 [2024-04-26 14:52:12.919208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.053 [2024-04-26 14:52:12.993979] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.053 [2024-04-26 14:52:12.994019] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.053 [2024-04-26 14:52:12.994027] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.053 [2024-04-26 14:52:12.994035] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.053 [2024-04-26 14:52:12.994042] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.053 [2024-04-26 14:52:12.994231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.053 [2024-04-26 14:52:12.994321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.053 [2024-04-26 14:52:12.994477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.053 [2024-04-26 14:52:12.994477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.053 14:52:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:31.053 14:52:13 -- common/autotest_common.sh@850 -- # return 0 00:15:31.053 14:52:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:31.053 14:52:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:31.053 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.053 14:52:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.053 14:52:13 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:31.053 14:52:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:31.053 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.053 14:52:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:31.053 14:52:13 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:31.053 14:52:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:31.053 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.315 14:52:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:31.315 14:52:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:31.315 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.315 [2024-04-26 14:52:13.726477] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.315 14:52:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:31.315 14:52:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:31.315 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.315 Malloc0 00:15:31.315 14:52:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:31.315 14:52:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:15:31.315 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.315 14:52:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:31.315 14:52:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:31.315 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.315 14:52:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.315 14:52:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:31.315 14:52:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.315 [2024-04-26 14:52:13.793099] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.315 14:52:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1029995 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@30 -- # READ_PID=1029997 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:31.315 14:52:13 -- nvmf/common.sh@521 -- # config=() 00:15:31.315 14:52:13 -- nvmf/common.sh@521 -- # local subsystem config 00:15:31.315 14:52:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:31.315 14:52:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:31.315 { 00:15:31.315 "params": { 00:15:31.315 "name": "Nvme$subsystem", 00:15:31.315 "trtype": "$TEST_TRANSPORT", 00:15:31.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:31.315 "adrfam": "ipv4", 00:15:31.315 "trsvcid": "$NVMF_PORT", 00:15:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:31.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:31.315 "hdgst": ${hdgst:-false}, 00:15:31.315 "ddgst": ${ddgst:-false} 00:15:31.315 }, 00:15:31.315 "method": "bdev_nvme_attach_controller" 00:15:31.315 } 00:15:31.315 EOF 00:15:31.315 )") 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1029999 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:31.315 14:52:13 -- nvmf/common.sh@521 -- # config=() 00:15:31.315 14:52:13 -- nvmf/common.sh@521 -- # local subsystem config 00:15:31.315 14:52:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1030002 00:15:31.315 14:52:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:31.315 { 00:15:31.315 "params": { 00:15:31.315 "name": "Nvme$subsystem", 00:15:31.315 "trtype": "$TEST_TRANSPORT", 00:15:31.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:31.315 "adrfam": "ipv4", 00:15:31.315 "trsvcid": "$NVMF_PORT", 00:15:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:31.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:31.315 "hdgst": ${hdgst:-false}, 00:15:31.315 "ddgst": ${ddgst:-false} 00:15:31.315 }, 00:15:31.315 "method": "bdev_nvme_attach_controller" 00:15:31.315 } 00:15:31.315 EOF 00:15:31.315 )") 00:15:31.315 
14:52:13 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@35 -- # sync 00:15:31.315 14:52:13 -- nvmf/common.sh@521 -- # config=() 00:15:31.315 14:52:13 -- nvmf/common.sh@543 -- # cat 00:15:31.315 14:52:13 -- nvmf/common.sh@521 -- # local subsystem config 00:15:31.315 14:52:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:31.315 14:52:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:31.315 { 00:15:31.315 "params": { 00:15:31.315 "name": "Nvme$subsystem", 00:15:31.315 "trtype": "$TEST_TRANSPORT", 00:15:31.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:31.315 "adrfam": "ipv4", 00:15:31.315 "trsvcid": "$NVMF_PORT", 00:15:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:31.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:31.315 "hdgst": ${hdgst:-false}, 00:15:31.315 "ddgst": ${ddgst:-false} 00:15:31.315 }, 00:15:31.315 "method": "bdev_nvme_attach_controller" 00:15:31.315 } 00:15:31.315 EOF 00:15:31.315 )") 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:31.315 14:52:13 -- nvmf/common.sh@521 -- # config=() 00:15:31.315 14:52:13 -- nvmf/common.sh@521 -- # local subsystem config 00:15:31.315 14:52:13 -- nvmf/common.sh@543 -- # cat 00:15:31.315 14:52:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:31.315 14:52:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:31.315 { 00:15:31.315 "params": { 00:15:31.315 "name": "Nvme$subsystem", 00:15:31.315 "trtype": "$TEST_TRANSPORT", 00:15:31.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:31.315 "adrfam": "ipv4", 00:15:31.315 "trsvcid": "$NVMF_PORT", 00:15:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:31.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:31.315 "hdgst": ${hdgst:-false}, 00:15:31.315 "ddgst": ${ddgst:-false} 00:15:31.315 }, 00:15:31.315 "method": "bdev_nvme_attach_controller" 00:15:31.315 } 00:15:31.315 EOF 00:15:31.315 )") 00:15:31.315 14:52:13 -- nvmf/common.sh@543 -- # cat 00:15:31.315 14:52:13 -- target/bdev_io_wait.sh@37 -- # wait 1029995 00:15:31.315 14:52:13 -- nvmf/common.sh@543 -- # cat 00:15:31.315 14:52:13 -- nvmf/common.sh@545 -- # jq . 00:15:31.315 14:52:13 -- nvmf/common.sh@545 -- # jq . 00:15:31.315 14:52:13 -- nvmf/common.sh@545 -- # jq . 00:15:31.315 14:52:13 -- nvmf/common.sh@546 -- # IFS=, 00:15:31.315 14:52:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:31.315 "params": { 00:15:31.315 "name": "Nvme1", 00:15:31.315 "trtype": "tcp", 00:15:31.315 "traddr": "10.0.0.2", 00:15:31.315 "adrfam": "ipv4", 00:15:31.315 "trsvcid": "4420", 00:15:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.315 "hdgst": false, 00:15:31.315 "ddgst": false 00:15:31.315 }, 00:15:31.315 "method": "bdev_nvme_attach_controller" 00:15:31.315 }' 00:15:31.315 14:52:13 -- nvmf/common.sh@545 -- # jq . 
00:15:31.315 14:52:13 -- nvmf/common.sh@546 -- # IFS=, 00:15:31.315 14:52:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:31.315 "params": { 00:15:31.315 "name": "Nvme1", 00:15:31.315 "trtype": "tcp", 00:15:31.315 "traddr": "10.0.0.2", 00:15:31.315 "adrfam": "ipv4", 00:15:31.315 "trsvcid": "4420", 00:15:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.315 "hdgst": false, 00:15:31.315 "ddgst": false 00:15:31.315 }, 00:15:31.315 "method": "bdev_nvme_attach_controller" 00:15:31.315 }' 00:15:31.315 14:52:13 -- nvmf/common.sh@546 -- # IFS=, 00:15:31.315 14:52:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:31.315 "params": { 00:15:31.315 "name": "Nvme1", 00:15:31.315 "trtype": "tcp", 00:15:31.315 "traddr": "10.0.0.2", 00:15:31.315 "adrfam": "ipv4", 00:15:31.315 "trsvcid": "4420", 00:15:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.315 "hdgst": false, 00:15:31.315 "ddgst": false 00:15:31.315 }, 00:15:31.315 "method": "bdev_nvme_attach_controller" 00:15:31.315 }' 00:15:31.315 14:52:13 -- nvmf/common.sh@546 -- # IFS=, 00:15:31.315 14:52:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:31.315 "params": { 00:15:31.315 "name": "Nvme1", 00:15:31.315 "trtype": "tcp", 00:15:31.315 "traddr": "10.0.0.2", 00:15:31.315 "adrfam": "ipv4", 00:15:31.315 "trsvcid": "4420", 00:15:31.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.315 "hdgst": false, 00:15:31.315 "ddgst": false 00:15:31.315 }, 00:15:31.315 "method": "bdev_nvme_attach_controller" 00:15:31.315 }' 00:15:31.315 [2024-04-26 14:52:13.845999] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:31.315 [2024-04-26 14:52:13.846052] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:31.315 [2024-04-26 14:52:13.846438] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:31.315 [2024-04-26 14:52:13.846483] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:31.315 [2024-04-26 14:52:13.849182] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:31.315 [2024-04-26 14:52:13.849225] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:31.315 [2024-04-26 14:52:13.855415] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
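Each of the four bdevperf workers above is handed the same generated target JSON on /dev/fd/63. Rebuilt from the printf output in the trace, the write worker's invocation would look roughly like the sketch below; the surrounding "subsystems"/"bdev" wrapper comes from gen_nvmf_target_json and is an assumption of this sketch rather than something shown in the excerpt.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# write worker: core mask 0x10, queue depth 128, 4 KiB I/O, 1 second run, 256 MiB hugepage memory
"$SPDK/build/examples/bdevperf" -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
)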
00:15:31.315 [2024-04-26 14:52:13.855506] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:31.315 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.315 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.575 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.575 [2024-04-26 14:52:13.996117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.575 [2024-04-26 14:52:14.037509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.575 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.575 [2024-04-26 14:52:14.046408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:31.575 [2024-04-26 14:52:14.086758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.575 [2024-04-26 14:52:14.089514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:31.575 [2024-04-26 14:52:14.134901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:31.575 [2024-04-26 14:52:14.148701] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.575 [2024-04-26 14:52:14.197513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:31.835 Running I/O for 1 seconds... 00:15:31.835 Running I/O for 1 seconds... 00:15:31.835 Running I/O for 1 seconds... 00:15:31.835 Running I/O for 1 seconds... 00:15:32.777 00:15:32.777 Latency(us) 00:15:32.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.777 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:32.777 Nvme1n1 : 1.01 11766.04 45.96 0.00 0.00 10844.50 5734.40 20206.93 00:15:32.777 =================================================================================================================== 00:15:32.777 Total : 11766.04 45.96 0.00 0.00 10844.50 5734.40 20206.93 00:15:32.777 00:15:32.777 Latency(us) 00:15:32.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.777 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:32.777 Nvme1n1 : 1.00 12299.78 48.05 0.00 0.00 10378.01 4833.28 19551.57 00:15:32.777 =================================================================================================================== 00:15:32.777 Total : 12299.78 48.05 0.00 0.00 10378.01 4833.28 19551.57 00:15:32.777 00:15:32.777 Latency(us) 00:15:32.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.777 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:32.777 Nvme1n1 : 1.00 15970.30 62.38 0.00 0.00 7993.24 4478.29 17913.17 00:15:32.777 =================================================================================================================== 00:15:32.777 Total : 15970.30 62.38 0.00 0.00 7993.24 4478.29 17913.17 00:15:33.039 14:52:15 -- target/bdev_io_wait.sh@38 -- # wait 1029997 00:15:33.039 00:15:33.039 Latency(us) 00:15:33.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.039 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:33.039 Nvme1n1 : 1.00 192106.91 750.42 0.00 0.00 663.67 261.12 768.00 00:15:33.039 =================================================================================================================== 00:15:33.039 Total : 192106.91 750.42 0.00 0.00 663.67 261.12 768.00 00:15:33.039 
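A quick consistency check on the result tables above: at the 4096-byte I/O size these jobs use, MiB/s = IOPS * 4096 / 1048576. For the write job, 11766.04 IOPS * 4096 B = 48,193,699.84 B/s, which is 45.96 MiB/s as reported; the read job (15970.30 IOPS -> 62.38 MiB/s) and the unmap job (12299.78 IOPS -> 48.05 MiB/s) follow the same relation. The flush job's roughly 192K IOPS at sub-millisecond average latency is consistent with flush carrying no data payload.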
14:52:15 -- target/bdev_io_wait.sh@39 -- # wait 1029999 00:15:33.039 14:52:15 -- target/bdev_io_wait.sh@40 -- # wait 1030002 00:15:33.039 14:52:15 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.039 14:52:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:33.039 14:52:15 -- common/autotest_common.sh@10 -- # set +x 00:15:33.039 14:52:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:33.039 14:52:15 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:33.039 14:52:15 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:33.039 14:52:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:33.039 14:52:15 -- nvmf/common.sh@117 -- # sync 00:15:33.039 14:52:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.039 14:52:15 -- nvmf/common.sh@120 -- # set +e 00:15:33.039 14:52:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.039 14:52:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.039 rmmod nvme_tcp 00:15:33.039 rmmod nvme_fabrics 00:15:33.039 rmmod nvme_keyring 00:15:33.039 14:52:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.039 14:52:15 -- nvmf/common.sh@124 -- # set -e 00:15:33.039 14:52:15 -- nvmf/common.sh@125 -- # return 0 00:15:33.039 14:52:15 -- nvmf/common.sh@478 -- # '[' -n 1029715 ']' 00:15:33.039 14:52:15 -- nvmf/common.sh@479 -- # killprocess 1029715 00:15:33.039 14:52:15 -- common/autotest_common.sh@936 -- # '[' -z 1029715 ']' 00:15:33.039 14:52:15 -- common/autotest_common.sh@940 -- # kill -0 1029715 00:15:33.039 14:52:15 -- common/autotest_common.sh@941 -- # uname 00:15:33.039 14:52:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:33.039 14:52:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1029715 00:15:33.300 14:52:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:33.300 14:52:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:33.300 14:52:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1029715' 00:15:33.300 killing process with pid 1029715 00:15:33.300 14:52:15 -- common/autotest_common.sh@955 -- # kill 1029715 00:15:33.300 14:52:15 -- common/autotest_common.sh@960 -- # wait 1029715 00:15:33.300 14:52:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:33.300 14:52:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:33.300 14:52:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:33.300 14:52:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.300 14:52:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:33.300 14:52:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.300 14:52:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.300 14:52:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.850 14:52:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:35.850 00:15:35.850 real 0m12.749s 00:15:35.850 user 0m18.991s 00:15:35.850 sys 0m6.947s 00:15:35.850 14:52:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:35.850 14:52:17 -- common/autotest_common.sh@10 -- # set +x 00:15:35.850 ************************************ 00:15:35.850 END TEST nvmf_bdev_io_wait 00:15:35.850 ************************************ 00:15:35.850 14:52:17 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:35.850 14:52:17 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:15:35.850 14:52:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.850 14:52:17 -- common/autotest_common.sh@10 -- # set +x 00:15:35.850 ************************************ 00:15:35.850 START TEST nvmf_queue_depth 00:15:35.850 ************************************ 00:15:35.850 14:52:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:35.850 * Looking for test storage... 00:15:35.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.850 14:52:18 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.850 14:52:18 -- nvmf/common.sh@7 -- # uname -s 00:15:35.850 14:52:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.850 14:52:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.850 14:52:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.850 14:52:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.850 14:52:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.850 14:52:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.850 14:52:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.850 14:52:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.850 14:52:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.850 14:52:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.850 14:52:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:35.850 14:52:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:35.850 14:52:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.850 14:52:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.850 14:52:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.850 14:52:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.850 14:52:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.850 14:52:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.850 14:52:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.850 14:52:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.850 14:52:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.850 14:52:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.850 14:52:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.850 14:52:18 -- paths/export.sh@5 -- # export PATH 00:15:35.850 14:52:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.850 14:52:18 -- nvmf/common.sh@47 -- # : 0 00:15:35.850 14:52:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.850 14:52:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.850 14:52:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.850 14:52:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.850 14:52:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.850 14:52:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.850 14:52:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.850 14:52:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.850 14:52:18 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:35.850 14:52:18 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:35.850 14:52:18 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:35.850 14:52:18 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:35.850 14:52:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:35.850 14:52:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.850 14:52:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:35.850 14:52:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:35.850 14:52:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:35.850 14:52:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.850 14:52:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.850 14:52:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.850 14:52:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:35.850 14:52:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:35.850 14:52:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:35.850 14:52:18 -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.998 14:52:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:43.998 14:52:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.998 14:52:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.998 14:52:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.998 14:52:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.998 14:52:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.998 14:52:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.998 14:52:25 -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.998 14:52:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.998 14:52:25 -- nvmf/common.sh@296 -- # e810=() 00:15:43.998 14:52:25 -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.998 14:52:25 -- nvmf/common.sh@297 -- # x722=() 00:15:43.998 14:52:25 -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.998 14:52:25 -- nvmf/common.sh@298 -- # mlx=() 00:15:43.998 14:52:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.998 14:52:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.998 14:52:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.998 14:52:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:43.998 14:52:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.998 14:52:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.998 14:52:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:43.998 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:43.998 14:52:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.998 14:52:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:43.998 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:43.998 14:52:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
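The scan above matches Intel E810 functions (vendor 0x8086, device 0x159b) and resolves each one to its kernel net device through /sys/bus/pci/devices/<bdf>/net. A rough standalone equivalent of that lookup, assuming the ice driver is already bound (the test script itself works from its prebuilt pci_bus_cache rather than lspci):

    # Enumerate E810 ports and the net devices sysfs exposes under them.
    for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$bdf"/net/*; do
        [ -e "$dev" ] && echo "Found net device under $bdf: $(basename "$dev")"
      done
    done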
00:15:43.998 14:52:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.998 14:52:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:43.998 14:52:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:43.999 14:52:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.999 14:52:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.999 14:52:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:43.999 14:52:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.999 14:52:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:43.999 Found net devices under 0000:31:00.0: cvl_0_0 00:15:43.999 14:52:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.999 14:52:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.999 14:52:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.999 14:52:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:43.999 14:52:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.999 14:52:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:43.999 Found net devices under 0000:31:00.1: cvl_0_1 00:15:43.999 14:52:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.999 14:52:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:43.999 14:52:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:43.999 14:52:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:43.999 14:52:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:43.999 14:52:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:43.999 14:52:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.999 14:52:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.999 14:52:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.999 14:52:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:43.999 14:52:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.999 14:52:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.999 14:52:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:43.999 14:52:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.999 14:52:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.999 14:52:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:43.999 14:52:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:43.999 14:52:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.999 14:52:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.999 14:52:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.999 14:52:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.999 14:52:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:43.999 14:52:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.999 14:52:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.999 14:52:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.999 14:52:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:43.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:43.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:15:43.999 00:15:43.999 --- 10.0.0.2 ping statistics --- 00:15:43.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.999 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:15:43.999 14:52:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:15:43.999 00:15:43.999 --- 10.0.0.1 ping statistics --- 00:15:43.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.999 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:15:43.999 14:52:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.999 14:52:25 -- nvmf/common.sh@411 -- # return 0 00:15:43.999 14:52:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:43.999 14:52:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.999 14:52:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:43.999 14:52:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:43.999 14:52:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.999 14:52:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:43.999 14:52:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:43.999 14:52:25 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:43.999 14:52:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:43.999 14:52:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:43.999 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:15:43.999 14:52:25 -- nvmf/common.sh@470 -- # nvmfpid=1034715 00:15:43.999 14:52:25 -- nvmf/common.sh@471 -- # waitforlisten 1034715 00:15:43.999 14:52:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.999 14:52:25 -- common/autotest_common.sh@817 -- # '[' -z 1034715 ']' 00:15:43.999 14:52:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.999 14:52:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:43.999 14:52:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.999 14:52:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:43.999 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:15:43.999 [2024-04-26 14:52:25.618820] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:43.999 [2024-04-26 14:52:25.618873] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.999 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.999 [2024-04-26 14:52:25.700242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.999 [2024-04-26 14:52:25.769440] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.999 [2024-04-26 14:52:25.769485] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:43.999 [2024-04-26 14:52:25.769493] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.999 [2024-04-26 14:52:25.769499] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.999 [2024-04-26 14:52:25.769505] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.999 [2024-04-26 14:52:25.769533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.999 14:52:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:43.999 14:52:26 -- common/autotest_common.sh@850 -- # return 0 00:15:43.999 14:52:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:43.999 14:52:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:43.999 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:43.999 14:52:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.999 14:52:26 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.999 14:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.999 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:43.999 [2024-04-26 14:52:26.436286] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.999 14:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.999 14:52:26 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.999 14:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.999 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:43.999 Malloc0 00:15:43.999 14:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.999 14:52:26 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:43.999 14:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.999 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:43.999 14:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.999 14:52:26 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.999 14:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.999 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:43.999 14:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.999 14:52:26 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.999 14:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.999 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:43.999 [2024-04-26 14:52:26.494557] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.999 14:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.999 14:52:26 -- target/queue_depth.sh@30 -- # bdevperf_pid=1034787 00:15:43.999 14:52:26 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:43.999 14:52:26 -- target/queue_depth.sh@33 -- # waitforlisten 1034787 /var/tmp/bdevperf.sock 00:15:43.999 14:52:26 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:43.999 14:52:26 -- common/autotest_common.sh@817 -- # '[' -z 1034787 ']' 
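The target bring-up above runs through rpc_cmd against the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace. The same sequence as plain scripts/rpc.py calls, assuming the default /var/tmp/spdk.sock RPC socket:

    # TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem with a namespace and a listener.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420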
00:15:43.999 14:52:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:43.999 14:52:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:43.999 14:52:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:43.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:43.999 14:52:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:43.999 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:43.999 [2024-04-26 14:52:26.547517] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:43.999 [2024-04-26 14:52:26.547576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034787 ] 00:15:43.999 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.999 [2024-04-26 14:52:26.611847] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.260 [2024-04-26 14:52:26.675416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.830 14:52:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:44.830 14:52:27 -- common/autotest_common.sh@850 -- # return 0 00:15:44.830 14:52:27 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:44.830 14:52:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.830 14:52:27 -- common/autotest_common.sh@10 -- # set +x 00:15:44.830 NVMe0n1 00:15:44.830 14:52:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.830 14:52:27 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:45.090 Running I/O for 10 seconds... 
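On the initiator side, the test starts bdevperf in wait-for-RPC mode on its own socket, attaches the remote controller, and then triggers the run; condensed from the commands above (the script waits for the bdevperf socket to appear before issuing the RPC):

    # Initiator side of the queue-depth test: 1024-deep verify workload for 10 seconds.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests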
00:15:55.186 00:15:55.186 Latency(us) 00:15:55.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.186 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:55.186 Verification LBA range: start 0x0 length 0x4000 00:15:55.186 NVMe0n1 : 10.04 11320.78 44.22 0.00 0.00 90165.50 4314.45 71215.79 00:15:55.186 =================================================================================================================== 00:15:55.186 Total : 11320.78 44.22 0.00 0.00 90165.50 4314.45 71215.79 00:15:55.186 0 00:15:55.186 14:52:37 -- target/queue_depth.sh@39 -- # killprocess 1034787 00:15:55.186 14:52:37 -- common/autotest_common.sh@936 -- # '[' -z 1034787 ']' 00:15:55.186 14:52:37 -- common/autotest_common.sh@940 -- # kill -0 1034787 00:15:55.186 14:52:37 -- common/autotest_common.sh@941 -- # uname 00:15:55.187 14:52:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.187 14:52:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1034787 00:15:55.187 14:52:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.187 14:52:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.187 14:52:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1034787' 00:15:55.187 killing process with pid 1034787 00:15:55.187 14:52:37 -- common/autotest_common.sh@955 -- # kill 1034787 00:15:55.187 Received shutdown signal, test time was about 10.000000 seconds 00:15:55.187 00:15:55.187 Latency(us) 00:15:55.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.187 =================================================================================================================== 00:15:55.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:55.187 14:52:37 -- common/autotest_common.sh@960 -- # wait 1034787 00:15:55.187 14:52:37 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:55.187 14:52:37 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:55.187 14:52:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:55.187 14:52:37 -- nvmf/common.sh@117 -- # sync 00:15:55.187 14:52:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.187 14:52:37 -- nvmf/common.sh@120 -- # set +e 00:15:55.187 14:52:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.187 14:52:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.187 rmmod nvme_tcp 00:15:55.187 rmmod nvme_fabrics 00:15:55.187 rmmod nvme_keyring 00:15:55.447 14:52:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.447 14:52:37 -- nvmf/common.sh@124 -- # set -e 00:15:55.447 14:52:37 -- nvmf/common.sh@125 -- # return 0 00:15:55.447 14:52:37 -- nvmf/common.sh@478 -- # '[' -n 1034715 ']' 00:15:55.447 14:52:37 -- nvmf/common.sh@479 -- # killprocess 1034715 00:15:55.447 14:52:37 -- common/autotest_common.sh@936 -- # '[' -z 1034715 ']' 00:15:55.447 14:52:37 -- common/autotest_common.sh@940 -- # kill -0 1034715 00:15:55.447 14:52:37 -- common/autotest_common.sh@941 -- # uname 00:15:55.447 14:52:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.447 14:52:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1034715 00:15:55.447 14:52:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:55.447 14:52:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:55.447 14:52:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1034715' 00:15:55.447 killing process with pid 1034715 00:15:55.447 
14:52:37 -- common/autotest_common.sh@955 -- # kill 1034715 00:15:55.447 14:52:37 -- common/autotest_common.sh@960 -- # wait 1034715 00:15:55.447 14:52:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:55.447 14:52:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:55.447 14:52:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:55.447 14:52:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.447 14:52:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:55.447 14:52:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.447 14:52:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.447 14:52:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.990 14:52:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:57.990 00:15:57.990 real 0m21.968s 00:15:57.990 user 0m25.442s 00:15:57.990 sys 0m6.571s 00:15:57.990 14:52:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:57.990 14:52:40 -- common/autotest_common.sh@10 -- # set +x 00:15:57.990 ************************************ 00:15:57.991 END TEST nvmf_queue_depth 00:15:57.991 ************************************ 00:15:57.991 14:52:40 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:57.991 14:52:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:57.991 14:52:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.991 14:52:40 -- common/autotest_common.sh@10 -- # set +x 00:15:57.991 ************************************ 00:15:57.991 START TEST nvmf_multipath 00:15:57.991 ************************************ 00:15:57.991 14:52:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:57.991 * Looking for test storage... 
00:15:57.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.991 14:52:40 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.991 14:52:40 -- nvmf/common.sh@7 -- # uname -s 00:15:57.991 14:52:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.991 14:52:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.991 14:52:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.991 14:52:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.991 14:52:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.991 14:52:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.991 14:52:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.991 14:52:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.991 14:52:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.991 14:52:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.991 14:52:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:57.991 14:52:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:57.991 14:52:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.991 14:52:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.991 14:52:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.991 14:52:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.991 14:52:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.991 14:52:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.991 14:52:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.991 14:52:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.991 14:52:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.991 14:52:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.991 14:52:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.991 14:52:40 -- paths/export.sh@5 -- # export PATH 00:15:57.991 14:52:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.991 14:52:40 -- nvmf/common.sh@47 -- # : 0 00:15:57.991 14:52:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.991 14:52:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.991 14:52:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.991 14:52:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.991 14:52:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.991 14:52:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.991 14:52:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.991 14:52:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.991 14:52:40 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.991 14:52:40 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.991 14:52:40 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:57.991 14:52:40 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.991 14:52:40 -- target/multipath.sh@43 -- # nvmftestinit 00:15:57.991 14:52:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:57.991 14:52:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.991 14:52:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:57.991 14:52:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:57.991 14:52:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:57.991 14:52:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.991 14:52:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.991 14:52:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.991 14:52:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:57.991 14:52:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:57.991 14:52:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:57.991 14:52:40 -- common/autotest_common.sh@10 -- # set +x 00:16:04.584 14:52:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:04.584 14:52:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.584 14:52:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.584 14:52:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.584 14:52:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.584 14:52:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.584 14:52:47 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.584 14:52:47 -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.584 14:52:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.584 14:52:47 -- nvmf/common.sh@296 -- # e810=() 00:16:04.584 14:52:47 -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.584 14:52:47 -- nvmf/common.sh@297 -- # x722=() 00:16:04.584 14:52:47 -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.584 14:52:47 -- nvmf/common.sh@298 -- # mlx=() 00:16:04.584 14:52:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.584 14:52:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.584 14:52:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.584 14:52:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.584 14:52:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.584 14:52:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.584 14:52:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:04.584 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:04.584 14:52:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.584 14:52:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:04.584 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:04.584 14:52:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.584 14:52:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.584 14:52:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.584 14:52:47 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:16:04.584 14:52:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.584 14:52:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:04.584 Found net devices under 0000:31:00.0: cvl_0_0 00:16:04.584 14:52:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.584 14:52:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.584 14:52:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.584 14:52:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:04.584 14:52:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.584 14:52:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:04.584 Found net devices under 0000:31:00.1: cvl_0_1 00:16:04.584 14:52:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.584 14:52:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:04.584 14:52:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:04.584 14:52:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:04.584 14:52:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:04.584 14:52:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.584 14:52:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.584 14:52:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.584 14:52:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.584 14:52:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.584 14:52:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.584 14:52:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.584 14:52:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.584 14:52:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.584 14:52:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.584 14:52:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.584 14:52:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.584 14:52:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.846 14:52:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.846 14:52:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.846 14:52:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:04.846 14:52:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.846 14:52:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.846 14:52:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.846 14:52:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:04.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:16:04.846 00:16:04.846 --- 10.0.0.2 ping statistics --- 00:16:04.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.846 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:16:04.846 14:52:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:05.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:16:05.107 00:16:05.107 --- 10.0.0.1 ping statistics --- 00:16:05.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.107 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:16:05.107 14:52:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.107 14:52:47 -- nvmf/common.sh@411 -- # return 0 00:16:05.107 14:52:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:05.107 14:52:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.107 14:52:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:05.107 14:52:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:05.107 14:52:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.107 14:52:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:05.107 14:52:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:05.107 14:52:47 -- target/multipath.sh@45 -- # '[' -z ']' 00:16:05.107 14:52:47 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:05.107 only one NIC for nvmf test 00:16:05.107 14:52:47 -- target/multipath.sh@47 -- # nvmftestfini 00:16:05.107 14:52:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:05.107 14:52:47 -- nvmf/common.sh@117 -- # sync 00:16:05.107 14:52:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.107 14:52:47 -- nvmf/common.sh@120 -- # set +e 00:16:05.107 14:52:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.107 14:52:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.107 rmmod nvme_tcp 00:16:05.107 rmmod nvme_fabrics 00:16:05.107 rmmod nvme_keyring 00:16:05.107 14:52:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.107 14:52:47 -- nvmf/common.sh@124 -- # set -e 00:16:05.107 14:52:47 -- nvmf/common.sh@125 -- # return 0 00:16:05.107 14:52:47 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:05.107 14:52:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:05.107 14:52:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:05.107 14:52:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:05.107 14:52:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.107 14:52:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.107 14:52:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.107 14:52:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.107 14:52:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.652 14:52:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.652 14:52:49 -- target/multipath.sh@48 -- # exit 0 00:16:07.652 14:52:49 -- target/multipath.sh@1 -- # nvmftestfini 00:16:07.652 14:52:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:07.652 14:52:49 -- nvmf/common.sh@117 -- # sync 00:16:07.652 14:52:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.652 14:52:49 -- nvmf/common.sh@120 -- # set +e 00:16:07.652 14:52:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.652 14:52:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.652 14:52:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.652 14:52:49 -- nvmf/common.sh@124 -- # set -e 00:16:07.652 14:52:49 -- nvmf/common.sh@125 -- # return 0 00:16:07.652 14:52:49 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:07.652 14:52:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:07.652 14:52:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:07.652 14:52:49 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:16:07.652 14:52:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.652 14:52:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:07.652 14:52:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.652 14:52:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.652 14:52:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.652 14:52:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.652 00:16:07.652 real 0m9.427s 00:16:07.652 user 0m2.043s 00:16:07.652 sys 0m5.271s 00:16:07.652 14:52:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:07.652 14:52:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.652 ************************************ 00:16:07.652 END TEST nvmf_multipath 00:16:07.652 ************************************ 00:16:07.652 14:52:49 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:07.652 14:52:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:07.652 14:52:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.652 14:52:49 -- common/autotest_common.sh@10 -- # set +x 00:16:07.652 ************************************ 00:16:07.652 START TEST nvmf_zcopy 00:16:07.652 ************************************ 00:16:07.652 14:52:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:07.652 * Looking for test storage... 00:16:07.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.652 14:52:50 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.652 14:52:50 -- nvmf/common.sh@7 -- # uname -s 00:16:07.652 14:52:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.652 14:52:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.652 14:52:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.652 14:52:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.652 14:52:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.652 14:52:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.652 14:52:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.652 14:52:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.652 14:52:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.652 14:52:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.652 14:52:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:07.652 14:52:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:07.652 14:52:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.652 14:52:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.652 14:52:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.652 14:52:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.653 14:52:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.653 14:52:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.653 14:52:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.653 14:52:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.653 
14:52:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.653 14:52:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.653 14:52:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.653 14:52:50 -- paths/export.sh@5 -- # export PATH 00:16:07.653 14:52:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.653 14:52:50 -- nvmf/common.sh@47 -- # : 0 00:16:07.653 14:52:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.653 14:52:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.653 14:52:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.653 14:52:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.653 14:52:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.653 14:52:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.653 14:52:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.653 14:52:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.653 14:52:50 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:07.653 14:52:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:07.653 14:52:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.653 14:52:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:07.653 14:52:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:07.653 14:52:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:07.653 14:52:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.653 14:52:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:16:07.653 14:52:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.653 14:52:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:07.653 14:52:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:07.653 14:52:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.653 14:52:50 -- common/autotest_common.sh@10 -- # set +x 00:16:15.795 14:52:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:15.795 14:52:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:15.795 14:52:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:15.795 14:52:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:15.795 14:52:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:15.795 14:52:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:15.795 14:52:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:15.795 14:52:56 -- nvmf/common.sh@295 -- # net_devs=() 00:16:15.795 14:52:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:15.795 14:52:56 -- nvmf/common.sh@296 -- # e810=() 00:16:15.795 14:52:56 -- nvmf/common.sh@296 -- # local -ga e810 00:16:15.795 14:52:56 -- nvmf/common.sh@297 -- # x722=() 00:16:15.795 14:52:56 -- nvmf/common.sh@297 -- # local -ga x722 00:16:15.795 14:52:56 -- nvmf/common.sh@298 -- # mlx=() 00:16:15.795 14:52:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:15.795 14:52:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.795 14:52:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:15.795 14:52:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:15.795 14:52:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:15.795 14:52:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.795 14:52:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:15.795 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:15.795 14:52:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.795 14:52:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:15.795 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:16:15.795 14:52:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:15.795 14:52:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.795 14:52:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.795 14:52:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:15.795 14:52:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.795 14:52:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:15.795 Found net devices under 0000:31:00.0: cvl_0_0 00:16:15.795 14:52:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.795 14:52:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.795 14:52:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.795 14:52:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:15.795 14:52:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.795 14:52:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:15.795 Found net devices under 0000:31:00.1: cvl_0_1 00:16:15.795 14:52:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.795 14:52:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:15.795 14:52:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:15.795 14:52:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:15.795 14:52:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:15.795 14:52:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.795 14:52:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.795 14:52:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.795 14:52:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:15.795 14:52:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.795 14:52:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.795 14:52:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:15.795 14:52:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.795 14:52:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.795 14:52:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:15.795 14:52:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:15.795 14:52:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.795 14:52:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.795 14:52:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.795 14:52:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.795 14:52:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:15.795 14:52:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.795 14:52:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.795 
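For orientation, the nvmf_tcp_init sequence traced above reduces to the following standalone sketch. The interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addressing are taken from this run; the iptables rule and ping checks that follow in the log complete the bring-up.

    # Move the target-side E810 port into its own network namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator interface, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target interface, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up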
14:52:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.795 14:52:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:16:15.795 00:16:15.795 --- 10.0.0.2 ping statistics --- 00:16:15.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.795 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:16:15.795 14:52:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:16:15.795 00:16:15.795 --- 10.0.0.1 ping statistics --- 00:16:15.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.795 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:16:15.795 14:52:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.795 14:52:57 -- nvmf/common.sh@411 -- # return 0 00:16:15.795 14:52:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:15.795 14:52:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.795 14:52:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:15.795 14:52:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:15.795 14:52:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.795 14:52:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:15.795 14:52:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:15.795 14:52:57 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:15.795 14:52:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:15.795 14:52:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:15.795 14:52:57 -- common/autotest_common.sh@10 -- # set +x 00:16:15.795 14:52:57 -- nvmf/common.sh@470 -- # nvmfpid=1045585 00:16:15.795 14:52:57 -- nvmf/common.sh@471 -- # waitforlisten 1045585 00:16:15.795 14:52:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:15.795 14:52:57 -- common/autotest_common.sh@817 -- # '[' -z 1045585 ']' 00:16:15.795 14:52:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.795 14:52:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:15.795 14:52:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.795 14:52:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:15.795 14:52:57 -- common/autotest_common.sh@10 -- # set +x 00:16:15.795 [2024-04-26 14:52:57.396178] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:15.795 [2024-04-26 14:52:57.396226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.795 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.795 [2024-04-26 14:52:57.477538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.795 [2024-04-26 14:52:57.539265] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:15.795 [2024-04-26 14:52:57.539300] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.795 [2024-04-26 14:52:57.539308] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.796 [2024-04-26 14:52:57.539314] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.796 [2024-04-26 14:52:57.539320] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.796 [2024-04-26 14:52:57.539345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.796 14:52:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:15.796 14:52:58 -- common/autotest_common.sh@850 -- # return 0 00:16:15.796 14:52:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:15.796 14:52:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:15.796 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:16:15.796 14:52:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.796 14:52:58 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:15.796 14:52:58 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:15.796 14:52:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.796 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:16:15.796 [2024-04-26 14:52:58.217636] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.796 14:52:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.796 14:52:58 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:15.796 14:52:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.796 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:16:15.796 14:52:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.796 14:52:58 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.796 14:52:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.796 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:16:15.796 [2024-04-26 14:52:58.241917] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.796 14:52:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.796 14:52:58 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.796 14:52:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.796 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:16:15.796 14:52:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.796 14:52:58 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:15.796 14:52:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.796 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:16:15.796 malloc0 00:16:15.796 14:52:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.796 14:52:58 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:15.796 14:52:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:15.796 14:52:58 -- common/autotest_common.sh@10 -- # set +x 00:16:15.796 14:52:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:15.796 14:52:58 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:15.796 14:52:58 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:15.796 14:52:58 -- nvmf/common.sh@521 -- # config=() 00:16:15.796 14:52:58 -- nvmf/common.sh@521 -- # local subsystem config 00:16:15.796 14:52:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:15.796 14:52:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:15.796 { 00:16:15.796 "params": { 00:16:15.796 "name": "Nvme$subsystem", 00:16:15.796 "trtype": "$TEST_TRANSPORT", 00:16:15.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:15.796 "adrfam": "ipv4", 00:16:15.796 "trsvcid": "$NVMF_PORT", 00:16:15.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.796 "hdgst": ${hdgst:-false}, 00:16:15.796 "ddgst": ${ddgst:-false} 00:16:15.796 }, 00:16:15.796 "method": "bdev_nvme_attach_controller" 00:16:15.796 } 00:16:15.796 EOF 00:16:15.796 )") 00:16:15.796 14:52:58 -- nvmf/common.sh@543 -- # cat 00:16:15.796 14:52:58 -- nvmf/common.sh@545 -- # jq . 00:16:15.796 14:52:58 -- nvmf/common.sh@546 -- # IFS=, 00:16:15.796 14:52:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:15.796 "params": { 00:16:15.796 "name": "Nvme1", 00:16:15.796 "trtype": "tcp", 00:16:15.796 "traddr": "10.0.0.2", 00:16:15.796 "adrfam": "ipv4", 00:16:15.796 "trsvcid": "4420", 00:16:15.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.796 "hdgst": false, 00:16:15.796 "ddgst": false 00:16:15.796 }, 00:16:15.796 "method": "bdev_nvme_attach_controller" 00:16:15.796 }' 00:16:15.796 [2024-04-26 14:52:58.340160] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:15.796 [2024-04-26 14:52:58.340230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045725 ] 00:16:15.796 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.796 [2024-04-26 14:52:58.404670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.057 [2024-04-26 14:52:58.477079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.057 Running I/O for 10 seconds... 
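Condensed from the rpc_cmd trace above, the target-side provisioning for this test is the sequence below, shown as direct scripts/rpc.py calls (the harness's rpc_cmd is a thin wrapper over the same RPCs; flags, paths, and NQNs are exactly those in the log).

    # TCP transport with zero-copy enabled (the --zcopy flag is what this test exercises),
    # then a subsystem exposing a 32 MiB / 4096-byte-block malloc bdev as namespace 1,
    # listening on the namespaced target address 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to that subsystem over TCP using the JSON fragment generated just above (Nvme1 -> 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) and drives the 10-second verify workload at queue depth 128 with 8 KiB I/O (-t 10 -q 128 -w verify -o 8192) whose results follow.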
00:16:26.155 00:16:26.155 Latency(us) 00:16:26.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.155 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:26.155 Verification LBA range: start 0x0 length 0x1000 00:16:26.155 Nvme1n1 : 10.05 9217.84 72.01 0.00 0.00 13788.64 3017.39 44346.03 00:16:26.155 =================================================================================================================== 00:16:26.155 Total : 9217.84 72.01 0.00 0.00 13788.64 3017.39 44346.03 00:16:26.416 14:53:08 -- target/zcopy.sh@39 -- # perfpid=1047835 00:16:26.416 14:53:08 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:26.416 14:53:08 -- common/autotest_common.sh@10 -- # set +x 00:16:26.416 14:53:08 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:26.416 14:53:08 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:26.416 14:53:08 -- nvmf/common.sh@521 -- # config=() 00:16:26.416 14:53:08 -- nvmf/common.sh@521 -- # local subsystem config 00:16:26.416 14:53:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:26.416 14:53:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:26.416 { 00:16:26.416 "params": { 00:16:26.416 "name": "Nvme$subsystem", 00:16:26.416 "trtype": "$TEST_TRANSPORT", 00:16:26.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.416 "adrfam": "ipv4", 00:16:26.416 "trsvcid": "$NVMF_PORT", 00:16:26.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.416 "hdgst": ${hdgst:-false}, 00:16:26.416 "ddgst": ${ddgst:-false} 00:16:26.416 }, 00:16:26.416 "method": "bdev_nvme_attach_controller" 00:16:26.416 } 00:16:26.416 EOF 00:16:26.416 )") 00:16:26.416 14:53:08 -- nvmf/common.sh@543 -- # cat 00:16:26.416 [2024-04-26 14:53:08.914410] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.416 [2024-04-26 14:53:08.914442] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.416 14:53:08 -- nvmf/common.sh@545 -- # jq . 
00:16:26.416 14:53:08 -- nvmf/common.sh@546 -- # IFS=, 00:16:26.417 14:53:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:26.417 "params": { 00:16:26.417 "name": "Nvme1", 00:16:26.417 "trtype": "tcp", 00:16:26.417 "traddr": "10.0.0.2", 00:16:26.417 "adrfam": "ipv4", 00:16:26.417 "trsvcid": "4420", 00:16:26.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.417 "hdgst": false, 00:16:26.417 "ddgst": false 00:16:26.417 }, 00:16:26.417 "method": "bdev_nvme_attach_controller" 00:16:26.417 }' 00:16:26.417 [2024-04-26 14:53:08.926410] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:08.926419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:08.938440] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:08.938448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:08.950469] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:08.950477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:08.962144] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:26.417 [2024-04-26 14:53:08.962197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047835 ] 00:16:26.417 [2024-04-26 14:53:08.962500] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:08.962509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:08.974528] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:08.974536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:08.986559] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:08.986567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.417 [2024-04-26 14:53:08.998590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:08.998598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:09.010622] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:09.010630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:09.022188] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.417 [2024-04-26 14:53:09.022651] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:09.022665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:09.034683] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:09.034693] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:26.417 [2024-04-26 14:53:09.046715] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:09.046723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:09.054736] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:09.054748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:09.062757] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:09.062767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:09.070777] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:09.070785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.417 [2024-04-26 14:53:09.078796] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.417 [2024-04-26 14:53:09.078804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.084554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.678 [2024-04-26 14:53:09.086818] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.086826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.094841] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.094850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.102871] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.102885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.110887] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.110895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.118901] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.118909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.126920] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.126929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.134942] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.134951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.142962] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.142970] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.150983] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.150991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.159013] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.159029] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.167027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.167036] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.175048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.175057] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.183071] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.183081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.191099] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.191108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.199109] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.199118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.207129] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.207136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.215150] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.215158] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.223171] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.223178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.231191] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.231199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.239215] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.239224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.247235] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.247242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.255257] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.678 [2024-04-26 14:53:09.255264] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.678 [2024-04-26 14:53:09.263276] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.263283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.679 [2024-04-26 14:53:09.271296] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.271303] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.679 [2024-04-26 14:53:09.279317] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.279325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.679 [2024-04-26 14:53:09.287338] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.287346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.679 [2024-04-26 14:53:09.295363] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.295371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.679 [2024-04-26 14:53:09.303379] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.303387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.679 [2024-04-26 14:53:09.311399] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.311406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.679 [2024-04-26 14:53:09.319418] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.319426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.679 [2024-04-26 14:53:09.327439] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.327447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.679 [2024-04-26 14:53:09.336264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.679 [2024-04-26 14:53:09.336278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.940 [2024-04-26 14:53:09.343482] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.940 [2024-04-26 14:53:09.343493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.940 Running I/O for 5 seconds... 
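The second bdevperf run uses a 5-second 50/50 random read/write workload at queue depth 128 with 8 KiB I/O (-t 5 -q 128 -w randrw -M 50 -o 8192). The "Requested NSID 1 already in use" / "Unable to add namespace" pairs that fill the rest of this run come from the add-namespace RPC path being exercised while that I/O is in flight: each call pauses the subsystem, fails because NSID 1 is already occupied by malloc0, and resumes it. A plausible reduction of that pattern (not the literal zcopy.sh loop) looks like this:

    # Illustrative sketch only. bdevperf.json is assumed to contain the
    # bdev_nvme_attach_controller entry printed above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    # Keep hitting the add-namespace RPC while I/O runs; every attempt pauses the
    # subsystem, logs the two *ERROR* lines seen above, and resumes it.
    while kill -0 "$perfpid" 2>/dev/null; do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
        sleep 0.1
    done
    wait "$perfpid"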
00:16:26.940 [2024-04-26 14:53:09.351502] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.940 [2024-04-26 14:53:09.351510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.940 [2024-04-26 14:53:09.362702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.940 [2024-04-26 14:53:09.362719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.940 [2024-04-26 14:53:09.369598] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.940 [2024-04-26 14:53:09.369613] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.940 [2024-04-26 14:53:09.378560] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.378575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.387566] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.387581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.396364] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.396379] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.405615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.405630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.414083] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.414098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.422563] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.422579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.431432] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.431447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.440214] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.440229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.449228] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.449242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.457860] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.457874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.466471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.466486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.475474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 
[2024-04-26 14:53:09.475488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.484581] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.484595] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.493696] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.493714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.502230] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.502245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.511290] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.511306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.520296] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.520311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.529625] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.529640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.537704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.537719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.546495] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.546510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.555434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.555449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.564312] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.564327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.573206] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.573221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.581824] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.581843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.590757] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.590772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.941 [2024-04-26 14:53:09.599714] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.941 [2024-04-26 14:53:09.599728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.202 [2024-04-26 14:53:09.608732] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.202 [2024-04-26 14:53:09.608747] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.202 [2024-04-26 14:53:09.617780] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.202 [2024-04-26 14:53:09.617794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.202 [2024-04-26 14:53:09.626671] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.202 [2024-04-26 14:53:09.626686] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.202 [2024-04-26 14:53:09.635767] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.202 [2024-04-26 14:53:09.635782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.202 [2024-04-26 14:53:09.644997] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.202 [2024-04-26 14:53:09.645012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.653986] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.654001] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.662740] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.662757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.671775] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.671789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.680430] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.680444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.689089] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.689104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.697805] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.697819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.706775] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.706789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.714970] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.714984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.723731] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.723746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.732860] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.732875] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.742078] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.742093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.751027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.751041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.760085] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.760101] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.769430] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.769444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.778467] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.778482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.787593] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.787608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.796404] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.796420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.805587] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.805601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.814385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.814399] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.822785] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.822800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.830860] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.830879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.839826] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.839847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.848226] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.848241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.857185] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.857200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.203 [2024-04-26 14:53:09.866277] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.203 [2024-04-26 14:53:09.866291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.875455] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.875470] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.884745] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.884761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.893201] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.893216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.902167] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.902182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.911072] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.911087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.920180] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.920194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.928800] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.928815] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.937614] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.937629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.946492] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.946507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.955581] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.955595] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.963530] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.963545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.972699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.972714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.981626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.981640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.990957] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.990972] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:09.998908] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:09.998926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.007785] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.007800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.016328] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.016343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.025052] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.025067] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.033613] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.033628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.042469] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.042484] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.051643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.051657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.060142] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.060157] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.068597] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.068612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.077891] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.077906] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.086525] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.086540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.095131] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.463 [2024-04-26 14:53:10.095146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.463 [2024-04-26 14:53:10.104367] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.464 [2024-04-26 14:53:10.104381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.464 [2024-04-26 14:53:10.113664] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.464 [2024-04-26 14:53:10.113678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.464 [2024-04-26 14:53:10.122286] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.464 [2024-04-26 14:53:10.122301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.724 [2024-04-26 14:53:10.131528] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.724 [2024-04-26 14:53:10.131543] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.724 [2024-04-26 14:53:10.140007] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.724 [2024-04-26 14:53:10.140021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.724 [2024-04-26 14:53:10.148675] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.724 [2024-04-26 14:53:10.148688] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.158087] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.158102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.167054] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.167068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.175895] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.175910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.184775] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.184790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.193575] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.193590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.202751] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.202766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.211120] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.211134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.220008] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.220022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.229033] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.229048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.238207] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.238222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.247067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.247082] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.255335] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.255350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.264005] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.264019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.273098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.273113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.282187] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.282202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.290899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.290913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.299704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.299719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.309011] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.309025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.317790] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.317805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.326441] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.326456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.335107] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.335121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.343805] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.343820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.351857] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.351871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.360705] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.360720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.369649] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.369664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.378702] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.378716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.725 [2024-04-26 14:53:10.387659] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.725 [2024-04-26 14:53:10.387673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.396236] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.396252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.405391] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.405406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.413263] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.413278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.422448] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.422463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.431256] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.431271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.440505] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.440519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.448978] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.448993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.457634] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.457649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.466605] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.466620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.475216] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.475231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.483949] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.483964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.492898] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.492913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.501621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.501636] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.510704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.510719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.519882] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.519897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.528529] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.528543] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.538099] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.538113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.546607] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.546622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.555710] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.555724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.564612] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.564627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.573149] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.573164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.581831] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.581850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.590912] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.590927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.598982] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.598997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.607951] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.607965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.616624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.616638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.625313] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.625328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.634426] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.634440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.642406] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.642420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.987 [2024-04-26 14:53:10.651222] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.987 [2024-04-26 14:53:10.651236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.660306] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.660321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.669491] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.669505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.678722] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.678737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.687577] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.687592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.696466] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.696480] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.705251] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.705266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.714119] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.714134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.723401] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.723415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.732216] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.732230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.741453] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.741468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.750503] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.750517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.759413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.759427] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.767902] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.767917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.777137] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.777151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.785762] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.785776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.794702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.794717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.803805] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.803820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.812417] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.812432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.821058] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.821072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.829835] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.829859] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.838848] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.838862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.847873] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.847888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.856520] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.856535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.865167] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.865182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.873901] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.873916] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.882513] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.882528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.891703] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.891718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.900294] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.900308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.249 [2024-04-26 14:53:10.908959] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.249 [2024-04-26 14:53:10.908973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.917708] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.917723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.926552] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.926567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.935704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.935718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.944216] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.944231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.953179] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.953193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.962427] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.962442] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.971018] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.971033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.979726] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.979740] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.988713] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.988727] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:10.997943] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:10.997960] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.006763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.006778] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.015236] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.015250] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.024077] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.024092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.032939] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.032953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.042008] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.042022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.050692] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.050706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.059545] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.059560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.067560] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.067574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.076432] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.076447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.085282] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.085297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.094135] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.094150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.102706] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.102721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.111371] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.111385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.120182] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.120197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.129273] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.129287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.137993] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.138007] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.146522] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.146536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.155624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.155638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.164510] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.164528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.511 [2024-04-26 14:53:11.173111] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.511 [2024-04-26 14:53:11.173125] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.182037] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.773 [2024-04-26 14:53:11.182051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.190822] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.773 [2024-04-26 14:53:11.190835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.199351] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.773 [2024-04-26 14:53:11.199365] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.208266] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.773 [2024-04-26 14:53:11.208281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.216852] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.773 [2024-04-26 14:53:11.216867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.225999] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.773 [2024-04-26 14:53:11.226014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.234520] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.773 [2024-04-26 14:53:11.234534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.243191] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.773 [2024-04-26 14:53:11.243205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.252183] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.773 [2024-04-26 14:53:11.252197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.773 [2024-04-26 14:53:11.261234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.261248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.270423] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.270438] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.279207] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.279221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.288467] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.288482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.296454] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.296468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.305472] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.305487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.314109] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.314123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.322818] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.322832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.332108] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.332126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.341283] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.341298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.350426] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.350440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.359332] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.359347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.367844] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.367858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.376310] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.376325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.385446] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.385460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.394114] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.394128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.402764] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.402779] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.410897] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.410911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.419903] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.419917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.428599] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.428613] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.774 [2024-04-26 14:53:11.438009] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.774 [2024-04-26 14:53:11.438023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.447159] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.447174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.455622] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.455636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.463959] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.463973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.472264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.472278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.481413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.481428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.490512] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.490526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.499338] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.499353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.508657] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.508671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.517537] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.517551] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.526825] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.526846] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.535392] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.535407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.544151] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.544166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.553058] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.553072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.561658] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.561672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.570239] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.570253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.578832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.578852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.587886] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.587900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.596950] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.596964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.605866] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.605880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.614378] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.614392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.623695] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.623709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.631678] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.631692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.640612] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.640627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.649587] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.649601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.658111] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.658125] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.667092] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.667107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.675725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.675739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.684810] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.684824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.035 [2024-04-26 14:53:11.693635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.035 [2024-04-26 14:53:11.693649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.702920] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.702935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.711932] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.711946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.720879] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.720893] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.729593] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.729608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.738772] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.738786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.747767] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.747781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.755868] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.755882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.764792] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.764806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.773853] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.773867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.782861] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.782875] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.792167] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.792181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.800725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.800739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.809563] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.809578] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.818022] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.818037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.826888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.826903] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.835634] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.835648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.843742] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.843756] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.853264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.853279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.861357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.861371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.870245] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.870259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.879050] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.879065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.888143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.888159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.896795] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.896809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.905580] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.905595] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.914372] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.914387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.922726] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.922740] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.931676] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.931691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.940224] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.940239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.948964] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.948979] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.297 [2024-04-26 14:53:11.957888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.297 [2024-04-26 14:53:11.957902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:11.966423] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:11.966438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:11.975426] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:11.975441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:11.983481] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:11.983495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:11.992433] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:11.992448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.001137] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.001152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.010246] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.010261] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.018979] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.018995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.027236] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.027251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.036058] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.036073] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.044633] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.044648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.053930] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.053944] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.061835] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.061854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.071194] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.071208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.079671] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.079686] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.088439] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.088454] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.097215] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.097229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.105643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.105658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.114861] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.114876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.123606] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.123621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.133036] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.133050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.141705] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.141720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.150242] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.150257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.159095] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.159113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.168485] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.168500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.177276] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.177291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.185973] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.185987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.195073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.195087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.203176] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.203190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.212003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.212018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.560 [2024-04-26 14:53:12.221184] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.560 [2024-04-26 14:53:12.221199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.229540] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.229555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.238417] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.238432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.247684] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.247699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.256435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.256449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.265361] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.265376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.274413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.274428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.283207] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.283221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.292090] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.292105] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.300771] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.300785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.309265] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.309280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.318223] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.318238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.327265] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.327283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.336173] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.336187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.345009] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.345024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.353964] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.353978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.362403] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.362418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.371648] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.371663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.380311] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.380326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.389044] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.389060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.398164] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.398179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.406793] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.406808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.415162] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.415177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.423493] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.423508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.431946] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.431960] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.440517] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.440532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.449240] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.449256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.457825] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.457844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.821 [2024-04-26 14:53:12.466488] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.821 [2024-04-26 14:53:12.466503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.822 [2024-04-26 14:53:12.475350] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.822 [2024-04-26 14:53:12.475364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.822 [2024-04-26 14:53:12.483721] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.822 [2024-04-26 14:53:12.483736] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.492606] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.492624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.501209] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.501224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.510194] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.510208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.519259] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.519273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.527468] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.527483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.536446] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.536460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.545124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.545139] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.553629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.553644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.563015] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.563030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.571691] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.571706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.580996] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.581010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.588978] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.588993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.597778] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.597792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.606627] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.606641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.615419] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.615434] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.624033] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.624047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.632517] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.632531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.641398] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.641413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.649969] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.649983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.658358] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.658374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.667375] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.667390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.676001] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.676016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.684989] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.685004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.693908] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.693922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.702770] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.702784] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.711464] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.711478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.720240] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.720254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.728859] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.728873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.737259] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.737273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.083 [2024-04-26 14:53:12.746066] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.083 [2024-04-26 14:53:12.746080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.754591] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.754606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.763379] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.763393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.772371] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.772386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.780822] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.780840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.789721] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.789735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.798856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.798870] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.808040] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.808055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.816828] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.816847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.825551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.825565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.833830] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.833849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.842737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.842752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.851235] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.851250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.859998] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.860013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.867989] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.868004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.877060] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.877075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.886233] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.886248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.894841] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.894856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.903419] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.903433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.915979] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.915994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.924124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.924139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.933202] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.933216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.941967] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.941982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.950356] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.950371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.958853] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.958867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.967332] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.967346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.975937] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.975951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.984388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.984401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:12.993661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:12.993675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.345 [2024-04-26 14:53:13.002763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.345 [2024-04-26 14:53:13.002778] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.011731] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.011747] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.020718] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.020732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.029881] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.029895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.038421] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.038435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.046949] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.046964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.055626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.055641] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.064640] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.064655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.073157] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.073172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.081918] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.081933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.090418] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.090433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.099451] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.099465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.108122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.108137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.117308] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.117322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.125896] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.125911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.134805] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.134820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.143272] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.143287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.152483] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.152497] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.161051] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.161066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.169927] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.169942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.178784] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.178798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.187312] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.187326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.196285] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.196299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.205136] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.205150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.213556] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.213570] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.222535] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.222550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.231253] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.231268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.240791] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.240806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.248804] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.248819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.257899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.257914] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.607 [2024-04-26 14:53:13.266445] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.607 [2024-04-26 14:53:13.266460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.867 [2024-04-26 14:53:13.275033] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.867 [2024-04-26 14:53:13.275048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.867 [2024-04-26 14:53:13.283581] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.867 [2024-04-26 14:53:13.283595] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.867 [2024-04-26 14:53:13.292709] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.867 [2024-04-26 14:53:13.292723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.867 [2024-04-26 14:53:13.300853] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.867 [2024-04-26 14:53:13.300867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.867 [2024-04-26 14:53:13.309337] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.867 [2024-04-26 14:53:13.309352] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.867 [2024-04-26 14:53:13.318209] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.867 [2024-04-26 14:53:13.318223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.867 [2024-04-26 14:53:13.327225] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.867 [2024-04-26 14:53:13.327240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.336052] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.336066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.344302] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.344316] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.353076] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.353090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.361994] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.362009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.370586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.370601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.379411] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.379426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.388113] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.388128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.396968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.396982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.405675] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.405690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.414981] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.414996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.423689] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.423703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.432374] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.432390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.441488] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.441503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.450237] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.450252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.459117] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.459131] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.467299] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.467313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.476072] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.476086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.485182] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.485199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.493915] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.493929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.503327] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.503342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.511448] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.511462] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.520793] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.520808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 [2024-04-26 14:53:13.529298] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-04-26 14:53:13.529312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.538504] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.538519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.547016] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.547031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.556048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.556062] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.564833] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.564852] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.573923] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.573938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.582408] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.582423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.591502] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.591517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.600410] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.600425] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.609012] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.609027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.618157] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.618172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.627269] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.627284] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.636488] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.636502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.645170] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.645184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.654257] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.654275] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.662850] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.662864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.671775] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.671791] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.681010] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.681025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.689515] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.689529] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.697637] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.697651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.706353] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.706368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.715173] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.715188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.723663] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.723678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.732660] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.732676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.741266] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.741280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.750067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.750083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.759014] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.759030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.767775] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.129 [2024-04-26 14:53:13.767790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.129 [2024-04-26 14:53:13.777043] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.130 [2024-04-26 14:53:13.777058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.130 [2024-04-26 14:53:13.786332] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.130 [2024-04-26 14:53:13.786347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.794736] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.794751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.803899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.803914] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.812448] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.812463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.821352] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.821369] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.830404] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.830419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.839436] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.839451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.848029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.848045] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.856942] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.856957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.866148] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.866163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.874184] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.874199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.390 [2024-04-26 14:53:13.883651] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.390 [2024-04-26 14:53:13.883666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.892917] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.892932] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.901414] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.901429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.910465] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.910479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.919942] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.919957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.928767] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.928782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.937699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.937714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.946555] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.946570] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.954515] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.954529] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.963129] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.963144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.971833] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.971853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.980899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.980914] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.989541] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.989561] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:13.998059] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:13.998073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:14.006948] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:14.006962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:14.015624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:14.015639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:14.024283] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:14.024297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:14.033094] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:14.033107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:14.041808] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:14.041822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.391 [2024-04-26 14:53:14.050922] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.391 [2024-04-26 14:53:14.050936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.059377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.059390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.068013] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.068027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.076875] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.076889] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.085454] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.085468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.094317] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.094330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.103407] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.103421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.112231] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.112245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.120218] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.120232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.129123] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.129138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.137752] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.137766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.146990] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.147004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.156276] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.156291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.164990] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.165005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.173896] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.173910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.182904] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.182927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.191671] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.191685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.200545] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.200558] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.209201] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.209215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.217660] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.217674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.226750] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.226764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.235158] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.235172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.244219] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.652 [2024-04-26 14:53:14.244233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.652 [2024-04-26 14:53:14.252219] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.653 [2024-04-26 14:53:14.252233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.653 [2024-04-26 14:53:14.260477] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.653 [2024-04-26 14:53:14.260491] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.653 [2024-04-26 14:53:14.269391] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.653 [2024-04-26 14:53:14.269405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.653 [2024-04-26 14:53:14.278169] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.653 [2024-04-26 14:53:14.278183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.653 [2024-04-26 14:53:14.286933] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.653 [2024-04-26 14:53:14.286947] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.653 [2024-04-26 14:53:14.295489] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.653 [2024-04-26 14:53:14.295503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.653 [2024-04-26 14:53:14.303516] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.653 [2024-04-26 14:53:14.303530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.653 [2024-04-26 14:53:14.312383] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.653 [2024-04-26 14:53:14.312396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.321304] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.321318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.330480] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.330494] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.338963] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.338977] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.347982] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.347996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.356319] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.356333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.362659] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.362672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914
00:16:31.914 Latency(us)
00:16:31.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:31.914 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:31.914 Nvme1n1 : 5.00 18930.26 147.89 0.00 0.00 6755.05 2539.52 20316.16
00:16:31.914 ===================================================================================================================
00:16:31.914 Total : 18930.26 147.89 0.00 0.00 6755.05 2539.52 20316.16
00:16:31.914 [2024-04-26 14:53:14.370679] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.370689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.378698] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.378708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.386721] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.386731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.394742] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.394753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.402762] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.402774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.410783] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.410792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.418802] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.418810] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.426823] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.426831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.434844]
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.434853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.442866] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.442873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.914 [2024-04-26 14:53:14.450888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.914 [2024-04-26 14:53:14.450896] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.915 [2024-04-26 14:53:14.458907] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.915 [2024-04-26 14:53:14.458916] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.915 [2024-04-26 14:53:14.466926] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.915 [2024-04-26 14:53:14.466933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.915 [2024-04-26 14:53:14.474948] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.915 [2024-04-26 14:53:14.474957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.915 [2024-04-26 14:53:14.482968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.915 [2024-04-26 14:53:14.482975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.915 [2024-04-26 14:53:14.490989] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.915 [2024-04-26 14:53:14.490996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1047835) - No such process 00:16:31.915 14:53:14 -- target/zcopy.sh@49 -- # wait 1047835 00:16:31.915 14:53:14 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.915 14:53:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.915 14:53:14 -- common/autotest_common.sh@10 -- # set +x 00:16:31.915 14:53:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.915 14:53:14 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:31.915 14:53:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.915 14:53:14 -- common/autotest_common.sh@10 -- # set +x 00:16:31.915 delay0 00:16:31.915 14:53:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.915 14:53:14 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:31.915 14:53:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.915 14:53:14 -- common/autotest_common.sh@10 -- # set +x 00:16:31.915 14:53:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.915 14:53:14 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:31.915 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.176 [2024-04-26 14:53:14.616290] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current 
discovery service or discovery service referral 00:16:40.311 Initializing NVMe Controllers 00:16:40.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:40.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:40.311 Initialization complete. Launching workers. 00:16:40.311 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 244, failed: 30058 00:16:40.311 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30185, failed to submit 117 00:16:40.311 success 30095, unsuccess 90, failed 0 00:16:40.311 14:53:21 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:40.311 14:53:21 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:40.311 14:53:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:40.311 14:53:21 -- nvmf/common.sh@117 -- # sync 00:16:40.311 14:53:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:40.311 14:53:21 -- nvmf/common.sh@120 -- # set +e 00:16:40.311 14:53:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:40.311 14:53:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:40.311 rmmod nvme_tcp 00:16:40.311 rmmod nvme_fabrics 00:16:40.311 rmmod nvme_keyring 00:16:40.311 14:53:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:40.311 14:53:21 -- nvmf/common.sh@124 -- # set -e 00:16:40.311 14:53:21 -- nvmf/common.sh@125 -- # return 0 00:16:40.311 14:53:21 -- nvmf/common.sh@478 -- # '[' -n 1045585 ']' 00:16:40.311 14:53:21 -- nvmf/common.sh@479 -- # killprocess 1045585 00:16:40.311 14:53:21 -- common/autotest_common.sh@936 -- # '[' -z 1045585 ']' 00:16:40.311 14:53:21 -- common/autotest_common.sh@940 -- # kill -0 1045585 00:16:40.311 14:53:21 -- common/autotest_common.sh@941 -- # uname 00:16:40.311 14:53:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.311 14:53:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1045585 00:16:40.311 14:53:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.311 14:53:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.311 14:53:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1045585' 00:16:40.311 killing process with pid 1045585 00:16:40.311 14:53:21 -- common/autotest_common.sh@955 -- # kill 1045585 00:16:40.311 14:53:21 -- common/autotest_common.sh@960 -- # wait 1045585 00:16:40.311 14:53:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:40.311 14:53:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:40.311 14:53:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:40.311 14:53:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.311 14:53:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:40.311 14:53:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.311 14:53:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.311 14:53:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.692 14:53:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:41.692 00:16:41.692 real 0m34.058s 00:16:41.692 user 0m45.696s 00:16:41.692 sys 0m11.419s 00:16:41.692 14:53:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:41.692 14:53:23 -- common/autotest_common.sh@10 -- # set +x 00:16:41.692 ************************************ 00:16:41.692 END TEST nvmf_zcopy 00:16:41.692 ************************************ 00:16:41.692 14:53:24 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:41.692 14:53:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:41.692 14:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.692 14:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:41.692 ************************************ 00:16:41.692 START TEST nvmf_nmic 00:16:41.692 ************************************ 00:16:41.692 14:53:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:41.692 * Looking for test storage... 00:16:41.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.692 14:53:24 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.692 14:53:24 -- nvmf/common.sh@7 -- # uname -s 00:16:41.692 14:53:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.692 14:53:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.692 14:53:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.692 14:53:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.692 14:53:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.692 14:53:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.692 14:53:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.692 14:53:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.692 14:53:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.692 14:53:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.692 14:53:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:41.692 14:53:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:41.692 14:53:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.692 14:53:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.692 14:53:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.692 14:53:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.692 14:53:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.692 14:53:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.692 14:53:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.692 14:53:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.692 14:53:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.693 14:53:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.693 14:53:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.693 14:53:24 -- paths/export.sh@5 -- # export PATH 00:16:41.693 14:53:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.693 14:53:24 -- nvmf/common.sh@47 -- # : 0 00:16:41.693 14:53:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.693 14:53:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.693 14:53:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.693 14:53:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.693 14:53:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.693 14:53:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.693 14:53:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.693 14:53:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.693 14:53:24 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:41.693 14:53:24 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:41.693 14:53:24 -- target/nmic.sh@14 -- # nvmftestinit 00:16:41.693 14:53:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:41.693 14:53:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.693 14:53:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:41.693 14:53:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:41.693 14:53:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:41.693 14:53:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.693 14:53:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.693 14:53:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.693 14:53:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:41.693 14:53:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:41.693 14:53:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.693 14:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:49.832 14:53:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:16:49.832 14:53:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:49.832 14:53:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:49.832 14:53:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:49.832 14:53:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:49.832 14:53:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:49.832 14:53:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:49.832 14:53:31 -- nvmf/common.sh@295 -- # net_devs=() 00:16:49.832 14:53:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:49.832 14:53:31 -- nvmf/common.sh@296 -- # e810=() 00:16:49.832 14:53:31 -- nvmf/common.sh@296 -- # local -ga e810 00:16:49.832 14:53:31 -- nvmf/common.sh@297 -- # x722=() 00:16:49.832 14:53:31 -- nvmf/common.sh@297 -- # local -ga x722 00:16:49.832 14:53:31 -- nvmf/common.sh@298 -- # mlx=() 00:16:49.832 14:53:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:49.832 14:53:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.832 14:53:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:49.832 14:53:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:49.832 14:53:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:49.832 14:53:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.832 14:53:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:49.832 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:49.832 14:53:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.832 14:53:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:49.832 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:49.832 14:53:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:16:49.832 14:53:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:49.832 14:53:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.832 14:53:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.832 14:53:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:49.832 14:53:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.832 14:53:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:49.832 Found net devices under 0000:31:00.0: cvl_0_0 00:16:49.832 14:53:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.832 14:53:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.832 14:53:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.833 14:53:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:49.833 14:53:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.833 14:53:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:49.833 Found net devices under 0000:31:00.1: cvl_0_1 00:16:49.833 14:53:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.833 14:53:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:49.833 14:53:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:49.833 14:53:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:49.833 14:53:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:49.833 14:53:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:49.833 14:53:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.833 14:53:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.833 14:53:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.833 14:53:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:49.833 14:53:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.833 14:53:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.833 14:53:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:49.833 14:53:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.833 14:53:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.833 14:53:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:49.833 14:53:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:49.833 14:53:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.833 14:53:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.833 14:53:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.833 14:53:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.833 14:53:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:49.833 14:53:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.833 14:53:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.833 14:53:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.833 14:53:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:49.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:49.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:16:49.833 00:16:49.833 --- 10.0.0.2 ping statistics --- 00:16:49.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.833 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:16:49.833 14:53:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:16:49.833 00:16:49.833 --- 10.0.0.1 ping statistics --- 00:16:49.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.833 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:16:49.833 14:53:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.833 14:53:31 -- nvmf/common.sh@411 -- # return 0 00:16:49.833 14:53:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:49.833 14:53:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.833 14:53:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:49.833 14:53:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:49.833 14:53:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.833 14:53:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:49.833 14:53:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:49.833 14:53:31 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:49.833 14:53:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:49.833 14:53:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:49.833 14:53:31 -- common/autotest_common.sh@10 -- # set +x 00:16:49.833 14:53:31 -- nvmf/common.sh@470 -- # nvmfpid=1054535 00:16:49.833 14:53:31 -- nvmf/common.sh@471 -- # waitforlisten 1054535 00:16:49.833 14:53:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:49.833 14:53:31 -- common/autotest_common.sh@817 -- # '[' -z 1054535 ']' 00:16:49.833 14:53:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.833 14:53:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:49.833 14:53:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.833 14:53:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:49.833 14:53:31 -- common/autotest_common.sh@10 -- # set +x 00:16:49.833 [2024-04-26 14:53:31.583231] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:49.833 [2024-04-26 14:53:31.583294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.833 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.833 [2024-04-26 14:53:31.655577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:49.833 [2024-04-26 14:53:31.730405] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.833 [2024-04-26 14:53:31.730447] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:49.833 [2024-04-26 14:53:31.730456] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.833 [2024-04-26 14:53:31.730467] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.833 [2024-04-26 14:53:31.730474] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.833 [2024-04-26 14:53:31.730638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.833 [2024-04-26 14:53:31.730751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.833 [2024-04-26 14:53:31.730884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.833 [2024-04-26 14:53:31.730884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.833 14:53:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:49.833 14:53:32 -- common/autotest_common.sh@850 -- # return 0 00:16:49.833 14:53:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:49.833 14:53:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:49.833 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.833 14:53:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.833 14:53:32 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.833 14:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.833 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.833 [2024-04-26 14:53:32.409435] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.833 14:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.833 14:53:32 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:49.833 14:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.833 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.833 Malloc0 00:16:49.833 14:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.833 14:53:32 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:49.833 14:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.833 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.833 14:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.833 14:53:32 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.833 14:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.833 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.833 14:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.833 14:53:32 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.833 14:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.833 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.833 [2024-04-26 14:53:32.468875] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.833 14:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.833 14:53:32 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:49.833 test case1: single bdev can't be used in multiple subsystems 00:16:49.833 14:53:32 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:49.833 14:53:32 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.833 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.834 14:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:49.834 14:53:32 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:49.834 14:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:49.834 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:50.101 14:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.101 14:53:32 -- target/nmic.sh@28 -- # nmic_status=0 00:16:50.101 14:53:32 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:50.101 14:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.101 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:50.101 [2024-04-26 14:53:32.504800] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:50.101 [2024-04-26 14:53:32.504817] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:50.101 [2024-04-26 14:53:32.504825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:50.101 request: 00:16:50.101 { 00:16:50.101 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:50.101 "namespace": { 00:16:50.101 "bdev_name": "Malloc0", 00:16:50.101 "no_auto_visible": false 00:16:50.101 }, 00:16:50.101 "method": "nvmf_subsystem_add_ns", 00:16:50.101 "req_id": 1 00:16:50.101 } 00:16:50.101 Got JSON-RPC error response 00:16:50.101 response: 00:16:50.101 { 00:16:50.101 "code": -32602, 00:16:50.101 "message": "Invalid parameters" 00:16:50.101 } 00:16:50.101 14:53:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:50.101 14:53:32 -- target/nmic.sh@29 -- # nmic_status=1 00:16:50.101 14:53:32 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:50.101 14:53:32 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:50.101 Adding namespace failed - expected result. 
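The namespace-conflict check in test case1 above boils down to the following RPC sequence; this is a condensed sketch of the rpc_cmd calls visible in the xtrace (rpc_cmd is roughly equivalent to invoking scripts/rpc.py against the running target), not a new script:

    nvmf_create_transport -t tcp -o -u 8192
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail

The last call returns JSON-RPC error -32602 (Invalid parameters) because Malloc0 is already claimed exclusive_write by cnode1; nmic.sh treats that failure as the expected result.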
00:16:50.101 14:53:32 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:50.101 test case2: host connect to nvmf target in multiple paths 00:16:50.101 14:53:32 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:50.101 14:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.101 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:50.101 [2024-04-26 14:53:32.516932] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:50.101 14:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.101 14:53:32 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.483 14:53:33 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:52.868 14:53:35 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.868 14:53:35 -- common/autotest_common.sh@1184 -- # local i=0 00:16:52.868 14:53:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.868 14:53:35 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:52.868 14:53:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:55.412 14:53:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:55.412 14:53:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:55.412 14:53:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.412 14:53:37 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:55.412 14:53:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.412 14:53:37 -- common/autotest_common.sh@1194 -- # return 0 00:16:55.412 14:53:37 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:55.412 [global] 00:16:55.412 thread=1 00:16:55.412 invalidate=1 00:16:55.412 rw=write 00:16:55.412 time_based=1 00:16:55.412 runtime=1 00:16:55.412 ioengine=libaio 00:16:55.412 direct=1 00:16:55.412 bs=4096 00:16:55.412 iodepth=1 00:16:55.412 norandommap=0 00:16:55.412 numjobs=1 00:16:55.412 00:16:55.412 verify_dump=1 00:16:55.412 verify_backlog=512 00:16:55.412 verify_state_save=0 00:16:55.412 do_verify=1 00:16:55.412 verify=crc32c-intel 00:16:55.412 [job0] 00:16:55.412 filename=/dev/nvme0n1 00:16:55.412 Could not set queue depth (nvme0n1) 00:16:55.412 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.412 fio-3.35 00:16:55.412 Starting 1 thread 00:16:56.801 00:16:56.801 job0: (groupid=0, jobs=1): err= 0: pid=1055907: Fri Apr 26 14:53:39 2024 00:16:56.801 read: IOPS=16, BW=67.7KiB/s (69.3kB/s)(68.0KiB/1005msec) 00:16:56.801 slat (nsec): min=7827, max=25615, avg=24217.76, stdev=4230.62 00:16:56.801 clat (usec): min=1085, max=42050, avg=39542.35, stdev=9910.70 00:16:56.801 lat (usec): min=1109, max=42075, avg=39566.56, stdev=9910.63 00:16:56.801 clat percentiles (usec): 00:16:56.801 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41681], 20.00th=[41681], 00:16:56.801 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:56.801 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:56.801 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:56.801 | 99.99th=[42206] 00:16:56.801 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:16:56.801 slat (usec): min=7, max=27206, avg=78.72, stdev=1201.30 00:16:56.801 clat (usec): min=115, max=881, avg=563.72, stdev=149.24 00:16:56.801 lat (usec): min=125, max=27849, avg=642.45, stdev=1214.90 00:16:56.801 clat percentiles (usec): 00:16:56.801 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 330], 20.00th=[ 437], 00:16:56.801 | 30.00th=[ 515], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:16:56.801 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 750], 00:16:56.801 | 99.00th=[ 791], 99.50th=[ 832], 99.90th=[ 881], 99.95th=[ 881], 00:16:56.801 | 99.99th=[ 881] 00:16:56.801 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.801 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.801 lat (usec) : 250=5.48%, 500=20.60%, 750=65.78%, 1000=4.91% 00:16:56.801 lat (msec) : 2=0.19%, 50=3.02% 00:16:56.801 cpu : usr=0.60%, sys=1.29%, ctx=532, majf=0, minf=1 00:16:56.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.801 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.801 00:16:56.801 Run status group 0 (all jobs): 00:16:56.801 READ: bw=67.7KiB/s (69.3kB/s), 67.7KiB/s-67.7KiB/s (69.3kB/s-69.3kB/s), io=68.0KiB (69.6kB), run=1005-1005msec 00:16:56.801 WRITE: bw=2038KiB/s (2087kB/s), 2038KiB/s-2038KiB/s (2087kB/s-2087kB/s), io=2048KiB (2097kB), run=1005-1005msec 00:16:56.801 00:16:56.801 Disk stats (read/write): 00:16:56.801 nvme0n1: ios=39/512, merge=0/0, ticks=1514/276, in_queue=1790, util=98.90% 00:16:56.801 14:53:39 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:56.801 14:53:39 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.801 14:53:39 -- common/autotest_common.sh@1205 -- # local i=0 00:16:56.801 14:53:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:56.801 14:53:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.801 14:53:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:56.801 14:53:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.801 14:53:39 -- common/autotest_common.sh@1217 -- # return 0 00:16:56.801 14:53:39 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:56.801 14:53:39 -- target/nmic.sh@53 -- # nvmftestfini 00:16:56.801 14:53:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:56.801 14:53:39 -- nvmf/common.sh@117 -- # sync 00:16:56.801 14:53:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.801 14:53:39 -- nvmf/common.sh@120 -- # set +e 00:16:56.801 14:53:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.801 14:53:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.801 rmmod nvme_tcp 00:16:56.801 rmmod nvme_fabrics 00:16:56.801 rmmod nvme_keyring 00:16:56.801 14:53:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.801 14:53:39 -- nvmf/common.sh@124 -- # set 
-e 00:16:56.801 14:53:39 -- nvmf/common.sh@125 -- # return 0 00:16:56.801 14:53:39 -- nvmf/common.sh@478 -- # '[' -n 1054535 ']' 00:16:56.801 14:53:39 -- nvmf/common.sh@479 -- # killprocess 1054535 00:16:56.801 14:53:39 -- common/autotest_common.sh@936 -- # '[' -z 1054535 ']' 00:16:56.801 14:53:39 -- common/autotest_common.sh@940 -- # kill -0 1054535 00:16:56.801 14:53:39 -- common/autotest_common.sh@941 -- # uname 00:16:56.801 14:53:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.801 14:53:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1054535 00:16:56.801 14:53:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:56.801 14:53:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:56.801 14:53:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1054535' 00:16:56.801 killing process with pid 1054535 00:16:56.801 14:53:39 -- common/autotest_common.sh@955 -- # kill 1054535 00:16:56.801 14:53:39 -- common/autotest_common.sh@960 -- # wait 1054535 00:16:57.063 14:53:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:57.063 14:53:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:57.063 14:53:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:57.063 14:53:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.063 14:53:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:57.063 14:53:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.063 14:53:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.063 14:53:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.977 14:53:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:58.977 00:16:58.977 real 0m17.419s 00:16:58.977 user 0m44.830s 00:16:58.977 sys 0m6.111s 00:16:58.977 14:53:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:58.977 14:53:41 -- common/autotest_common.sh@10 -- # set +x 00:16:58.977 ************************************ 00:16:58.977 END TEST nvmf_nmic 00:16:58.977 ************************************ 00:16:58.977 14:53:41 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:58.977 14:53:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:58.977 14:53:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:58.977 14:53:41 -- common/autotest_common.sh@10 -- # set +x 00:16:59.238 ************************************ 00:16:59.238 START TEST nvmf_fio_target 00:16:59.238 ************************************ 00:16:59.238 14:53:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:59.238 * Looking for test storage... 
00:16:59.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.238 14:53:41 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.238 14:53:41 -- nvmf/common.sh@7 -- # uname -s 00:16:59.238 14:53:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.238 14:53:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.238 14:53:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.238 14:53:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.238 14:53:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.238 14:53:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.238 14:53:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.238 14:53:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.238 14:53:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.238 14:53:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.238 14:53:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.238 14:53:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.238 14:53:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.238 14:53:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.238 14:53:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.238 14:53:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.238 14:53:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.500 14:53:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.500 14:53:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.500 14:53:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.500 14:53:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.500 14:53:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.500 14:53:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.500 14:53:41 -- paths/export.sh@5 -- # export PATH 00:16:59.500 14:53:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.500 14:53:41 -- nvmf/common.sh@47 -- # : 0 00:16:59.500 14:53:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.500 14:53:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.500 14:53:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.500 14:53:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.500 14:53:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.500 14:53:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.500 14:53:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.500 14:53:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.500 14:53:41 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.500 14:53:41 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.500 14:53:41 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.500 14:53:41 -- target/fio.sh@16 -- # nvmftestinit 00:16:59.500 14:53:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:59.500 14:53:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.500 14:53:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:59.500 14:53:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:59.500 14:53:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:59.500 14:53:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.500 14:53:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.500 14:53:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.500 14:53:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:59.500 14:53:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:59.500 14:53:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:59.500 14:53:41 -- common/autotest_common.sh@10 -- # set +x 00:17:06.086 14:53:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:06.086 14:53:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:06.086 14:53:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:06.086 14:53:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:06.086 14:53:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:06.086 14:53:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:06.086 14:53:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:06.086 14:53:48 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:06.086 14:53:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:06.086 14:53:48 -- nvmf/common.sh@296 -- # e810=() 00:17:06.086 14:53:48 -- nvmf/common.sh@296 -- # local -ga e810 00:17:06.086 14:53:48 -- nvmf/common.sh@297 -- # x722=() 00:17:06.086 14:53:48 -- nvmf/common.sh@297 -- # local -ga x722 00:17:06.086 14:53:48 -- nvmf/common.sh@298 -- # mlx=() 00:17:06.086 14:53:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:06.086 14:53:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.086 14:53:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:06.086 14:53:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:06.086 14:53:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:06.086 14:53:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.086 14:53:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:06.086 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:06.086 14:53:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.086 14:53:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:06.086 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:06.086 14:53:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:06.086 14:53:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:06.086 14:53:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.086 14:53:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.086 14:53:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:06.086 14:53:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:06.086 14:53:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:06.086 Found net devices under 0000:31:00.0: cvl_0_0 00:17:06.086 14:53:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.086 14:53:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.086 14:53:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.086 14:53:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:06.086 14:53:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.086 14:53:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:06.086 Found net devices under 0000:31:00.1: cvl_0_1 00:17:06.086 14:53:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.086 14:53:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:06.086 14:53:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:06.087 14:53:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:06.087 14:53:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:06.087 14:53:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:06.087 14:53:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.087 14:53:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.087 14:53:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.087 14:53:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:06.087 14:53:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.087 14:53:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.087 14:53:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:06.087 14:53:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.087 14:53:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.087 14:53:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:06.087 14:53:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:06.087 14:53:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.087 14:53:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.087 14:53:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.087 14:53:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.087 14:53:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:06.087 14:53:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.087 14:53:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.087 14:53:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.087 14:53:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:06.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:17:06.087 00:17:06.087 --- 10.0.0.2 ping statistics --- 00:17:06.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.087 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:17:06.087 14:53:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:17:06.087 00:17:06.087 --- 10.0.0.1 ping statistics --- 00:17:06.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.087 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:17:06.087 14:53:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.087 14:53:48 -- nvmf/common.sh@411 -- # return 0 00:17:06.087 14:53:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:06.087 14:53:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.087 14:53:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:06.087 14:53:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:06.087 14:53:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.087 14:53:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:06.087 14:53:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:06.087 14:53:48 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:06.087 14:53:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:06.087 14:53:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:06.087 14:53:48 -- common/autotest_common.sh@10 -- # set +x 00:17:06.087 14:53:48 -- nvmf/common.sh@470 -- # nvmfpid=1060336 00:17:06.087 14:53:48 -- nvmf/common.sh@471 -- # waitforlisten 1060336 00:17:06.087 14:53:48 -- common/autotest_common.sh@817 -- # '[' -z 1060336 ']' 00:17:06.087 14:53:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.087 14:53:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:06.087 14:53:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.087 14:53:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:06.087 14:53:48 -- common/autotest_common.sh@10 -- # set +x 00:17:06.087 14:53:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:06.348 [2024-04-26 14:53:48.759057] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:06.348 [2024-04-26 14:53:48.759105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.348 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.348 [2024-04-26 14:53:48.825375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.348 [2024-04-26 14:53:48.890688] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.348 [2024-04-26 14:53:48.890724] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.348 [2024-04-26 14:53:48.890733] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.348 [2024-04-26 14:53:48.890741] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.348 [2024-04-26 14:53:48.890748] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
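Before this nvmf_tgt instance was started, fio.sh repeated the same namespace plumbing used in the nmic run: one port of the NIC pair is moved into a private network namespace, both sides get 10.0.0.x/24 addresses, TCP port 4420 is opened, and the target is launched inside the namespace. Condensed from the trace above (a sketch; cvl_0_0/cvl_0_1 are the cached names of the two E810 ports found earlier):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF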
00:17:06.348 [2024-04-26 14:53:48.890922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.348 [2024-04-26 14:53:48.891168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.348 [2024-04-26 14:53:48.891325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.348 [2024-04-26 14:53:48.891326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.920 14:53:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:06.920 14:53:49 -- common/autotest_common.sh@850 -- # return 0 00:17:06.920 14:53:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:06.920 14:53:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:06.920 14:53:49 -- common/autotest_common.sh@10 -- # set +x 00:17:06.920 14:53:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.920 14:53:49 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:07.182 [2024-04-26 14:53:49.705872] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.182 14:53:49 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.442 14:53:49 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:07.442 14:53:49 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.442 14:53:50 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:07.442 14:53:50 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.702 14:53:50 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:07.702 14:53:50 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.963 14:53:50 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:07.963 14:53:50 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:07.963 14:53:50 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.224 14:53:50 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:08.224 14:53:50 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.484 14:53:50 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:08.484 14:53:50 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.484 14:53:51 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:08.484 14:53:51 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:08.746 14:53:51 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:09.008 14:53:51 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:09.008 14:53:51 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:09.008 14:53:51 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:09.008 14:53:51 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.269 14:53:51 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.530 [2024-04-26 14:53:51.948087] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.530 14:53:51 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:09.530 14:53:52 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:09.791 14:53:52 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.177 14:53:53 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:11.177 14:53:53 -- common/autotest_common.sh@1184 -- # local i=0 00:17:11.177 14:53:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.177 14:53:53 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:17:11.177 14:53:53 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:17:11.177 14:53:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:13.722 14:53:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:13.722 14:53:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:13.723 14:53:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.723 14:53:55 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:17:13.723 14:53:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.723 14:53:55 -- common/autotest_common.sh@1194 -- # return 0 00:17:13.723 14:53:55 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:13.723 [global] 00:17:13.723 thread=1 00:17:13.723 invalidate=1 00:17:13.723 rw=write 00:17:13.723 time_based=1 00:17:13.723 runtime=1 00:17:13.723 ioengine=libaio 00:17:13.723 direct=1 00:17:13.723 bs=4096 00:17:13.723 iodepth=1 00:17:13.723 norandommap=0 00:17:13.723 numjobs=1 00:17:13.723 00:17:13.723 verify_dump=1 00:17:13.723 verify_backlog=512 00:17:13.723 verify_state_save=0 00:17:13.723 do_verify=1 00:17:13.723 verify=crc32c-intel 00:17:13.723 [job0] 00:17:13.723 filename=/dev/nvme0n1 00:17:13.723 [job1] 00:17:13.723 filename=/dev/nvme0n2 00:17:13.723 [job2] 00:17:13.723 filename=/dev/nvme0n3 00:17:13.723 [job3] 00:17:13.723 filename=/dev/nvme0n4 00:17:13.723 Could not set queue depth (nvme0n1) 00:17:13.723 Could not set queue depth (nvme0n2) 00:17:13.723 Could not set queue depth (nvme0n3) 00:17:13.723 Could not set queue depth (nvme0n4) 00:17:13.723 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.723 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.723 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.723 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.723 fio-3.35 
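The job file printed above drives one write-and-verify job against each of the four namespaces that cnode1 now exposes (Malloc0, Malloc1, raid0 and concat0), which the initiator sees as /dev/nvme0n1 through /dev/nvme0n4. For a single device, a roughly equivalent standalone fio invocation would be (a sketch only; fio accepts job-file parameters as command-line flags, and the wrapper may add options not shown here):

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

The 'Could not set queue depth' lines above are fio warnings rather than errors; the equivalent single-job run in the nmic test completed with err=0 despite printing the same message.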
00:17:13.723 Starting 4 threads 00:17:15.133 00:17:15.133 job0: (groupid=0, jobs=1): err= 0: pid=1062181: Fri Apr 26 14:53:57 2024 00:17:15.133 read: IOPS=16, BW=67.7KiB/s (69.4kB/s)(68.0KiB/1004msec) 00:17:15.133 slat (nsec): min=26042, max=41653, avg=27381.76, stdev=3695.91 00:17:15.133 clat (usec): min=1159, max=42045, avg=39526.17, stdev=9888.39 00:17:15.133 lat (usec): min=1200, max=42071, avg=39553.55, stdev=9884.72 00:17:15.133 clat percentiles (usec): 00:17:15.133 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41157], 20.00th=[41681], 00:17:15.133 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:15.133 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:15.133 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:15.133 | 99.99th=[42206] 00:17:15.133 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:17:15.133 slat (nsec): min=8953, max=58551, avg=29054.42, stdev=10785.20 00:17:15.134 clat (usec): min=255, max=1292, avg=611.01, stdev=145.65 00:17:15.134 lat (usec): min=265, max=1325, avg=640.07, stdev=151.08 00:17:15.134 clat percentiles (usec): 00:17:15.134 | 1.00th=[ 293], 5.00th=[ 367], 10.00th=[ 408], 20.00th=[ 478], 00:17:15.134 | 30.00th=[ 529], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:17:15.134 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 832], 00:17:15.134 | 99.00th=[ 979], 99.50th=[ 1106], 99.90th=[ 1287], 99.95th=[ 1287], 00:17:15.134 | 99.99th=[ 1287] 00:17:15.134 bw ( KiB/s): min= 4096, max= 4096, per=46.02%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.134 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.134 lat (usec) : 500=23.82%, 750=58.60%, 1000=13.61% 00:17:15.134 lat (msec) : 2=0.95%, 50=3.02% 00:17:15.134 cpu : usr=0.80%, sys=2.09%, ctx=531, majf=0, minf=1 00:17:15.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.134 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.134 job1: (groupid=0, jobs=1): err= 0: pid=1062199: Fri Apr 26 14:53:57 2024 00:17:15.134 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:15.134 slat (nsec): min=7658, max=58880, avg=26933.44, stdev=3184.98 00:17:15.134 clat (usec): min=781, max=1221, avg=997.50, stdev=60.04 00:17:15.134 lat (usec): min=808, max=1248, avg=1024.43, stdev=60.02 00:17:15.134 clat percentiles (usec): 00:17:15.134 | 1.00th=[ 816], 5.00th=[ 889], 10.00th=[ 922], 20.00th=[ 963], 00:17:15.134 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:17:15.134 | 70.00th=[ 1029], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090], 00:17:15.134 | 99.00th=[ 1172], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1221], 00:17:15.134 | 99.99th=[ 1221] 00:17:15.134 write: IOPS=706, BW=2825KiB/s (2893kB/s)(2828KiB/1001msec); 0 zone resets 00:17:15.134 slat (usec): min=9, max=2235, avg=33.35, stdev=83.60 00:17:15.134 clat (usec): min=234, max=1079, avg=624.91, stdev=134.71 00:17:15.134 lat (usec): min=244, max=2780, avg=658.26, stdev=160.95 00:17:15.134 clat percentiles (usec): 00:17:15.134 | 1.00th=[ 343], 5.00th=[ 412], 10.00th=[ 453], 20.00th=[ 510], 00:17:15.134 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 660], 00:17:15.134 | 70.00th=[ 693], 80.00th=[ 
734], 90.00th=[ 791], 95.00th=[ 857], 00:17:15.134 | 99.00th=[ 979], 99.50th=[ 1029], 99.90th=[ 1074], 99.95th=[ 1074], 00:17:15.134 | 99.99th=[ 1074] 00:17:15.134 bw ( KiB/s): min= 4096, max= 4096, per=46.02%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.134 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.134 lat (usec) : 250=0.08%, 500=10.25%, 750=37.74%, 1000=30.76% 00:17:15.134 lat (msec) : 2=21.16% 00:17:15.134 cpu : usr=3.40%, sys=3.80%, ctx=1221, majf=0, minf=1 00:17:15.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.134 issued rwts: total=512,707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.134 job2: (groupid=0, jobs=1): err= 0: pid=1062218: Fri Apr 26 14:53:57 2024 00:17:15.134 read: IOPS=18, BW=75.4KiB/s (77.2kB/s)(76.0KiB/1008msec) 00:17:15.134 slat (nsec): min=26933, max=27571, avg=27153.95, stdev=158.22 00:17:15.134 clat (usec): min=984, max=41806, avg=38911.75, stdev=9186.75 00:17:15.134 lat (usec): min=1012, max=41833, avg=38938.91, stdev=9186.71 00:17:15.134 clat percentiles (usec): 00:17:15.134 | 1.00th=[ 988], 5.00th=[ 988], 10.00th=[40633], 20.00th=[40633], 00:17:15.134 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:15.134 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:17:15.134 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:15.134 | 99.99th=[41681] 00:17:15.134 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:17:15.134 slat (nsec): min=9152, max=55776, avg=32730.36, stdev=9159.28 00:17:15.134 clat (usec): min=189, max=1085, avg=482.43, stdev=138.84 00:17:15.134 lat (usec): min=225, max=1119, avg=515.16, stdev=141.82 00:17:15.134 clat percentiles (usec): 00:17:15.134 | 1.00th=[ 239], 5.00th=[ 281], 10.00th=[ 318], 20.00th=[ 351], 00:17:15.134 | 30.00th=[ 396], 40.00th=[ 445], 50.00th=[ 482], 60.00th=[ 510], 00:17:15.134 | 70.00th=[ 545], 80.00th=[ 586], 90.00th=[ 660], 95.00th=[ 709], 00:17:15.134 | 99.00th=[ 865], 99.50th=[ 930], 99.90th=[ 1090], 99.95th=[ 1090], 00:17:15.134 | 99.99th=[ 1090] 00:17:15.134 bw ( KiB/s): min= 4096, max= 4096, per=46.02%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.134 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.134 lat (usec) : 250=2.07%, 500=52.73%, 750=37.85%, 1000=3.58% 00:17:15.134 lat (msec) : 2=0.38%, 50=3.39% 00:17:15.134 cpu : usr=1.69%, sys=1.49%, ctx=532, majf=0, minf=1 00:17:15.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.134 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.134 job3: (groupid=0, jobs=1): err= 0: pid=1062226: Fri Apr 26 14:53:57 2024 00:17:15.134 read: IOPS=24, BW=99.6KiB/s (102kB/s)(100KiB/1004msec) 00:17:15.134 slat (nsec): min=8500, max=26572, avg=24862.12, stdev=4913.07 00:17:15.134 clat (usec): min=585, max=41119, avg=28125.03, stdev=19148.28 00:17:15.134 lat (usec): min=594, max=41146, avg=28149.89, stdev=19150.42 00:17:15.134 clat percentiles (usec): 00:17:15.134 | 
1.00th=[ 586], 5.00th=[ 717], 10.00th=[ 742], 20.00th=[ 816], 00:17:15.134 | 30.00th=[ 898], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:15.134 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:15.134 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:15.134 | 99.99th=[41157] 00:17:15.134 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:17:15.134 slat (usec): min=10, max=43296, avg=117.84, stdev=1911.99 00:17:15.134 clat (usec): min=154, max=682, avg=459.54, stdev=113.51 00:17:15.134 lat (usec): min=184, max=43574, avg=577.38, stdev=1907.51 00:17:15.134 clat percentiles (usec): 00:17:15.134 | 1.00th=[ 239], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 334], 00:17:15.134 | 30.00th=[ 367], 40.00th=[ 412], 50.00th=[ 510], 60.00th=[ 537], 00:17:15.134 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 594], 00:17:15.134 | 99.00th=[ 635], 99.50th=[ 668], 99.90th=[ 685], 99.95th=[ 685], 00:17:15.134 | 99.99th=[ 685] 00:17:15.134 bw ( KiB/s): min= 4096, max= 4096, per=46.02%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.134 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.134 lat (usec) : 250=1.30%, 500=45.07%, 750=49.53%, 1000=0.93% 00:17:15.134 lat (msec) : 50=3.17% 00:17:15.134 cpu : usr=0.70%, sys=1.69%, ctx=539, majf=0, minf=1 00:17:15.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.134 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.134 00:17:15.134 Run status group 0 (all jobs): 00:17:15.134 READ: bw=2274KiB/s (2328kB/s), 67.7KiB/s-2046KiB/s (69.4kB/s-2095kB/s), io=2292KiB (2347kB), run=1001-1008msec 00:17:15.134 WRITE: bw=8901KiB/s (9114kB/s), 2032KiB/s-2825KiB/s (2081kB/s-2893kB/s), io=8972KiB (9187kB), run=1001-1008msec 00:17:15.134 00:17:15.134 Disk stats (read/write): 00:17:15.134 nvme0n1: ios=69/512, merge=0/0, ticks=738/259, in_queue=997, util=85.47% 00:17:15.134 nvme0n2: ios=527/512, merge=0/0, ticks=674/255, in_queue=929, util=87.96% 00:17:15.134 nvme0n3: ios=36/512, merge=0/0, ticks=1409/198, in_queue=1607, util=92.08% 00:17:15.134 nvme0n4: ios=64/512, merge=0/0, ticks=850/228, in_queue=1078, util=97.12% 00:17:15.134 14:53:57 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:15.134 [global] 00:17:15.134 thread=1 00:17:15.134 invalidate=1 00:17:15.134 rw=randwrite 00:17:15.134 time_based=1 00:17:15.134 runtime=1 00:17:15.134 ioengine=libaio 00:17:15.134 direct=1 00:17:15.134 bs=4096 00:17:15.134 iodepth=1 00:17:15.134 norandommap=0 00:17:15.134 numjobs=1 00:17:15.134 00:17:15.134 verify_dump=1 00:17:15.134 verify_backlog=512 00:17:15.134 verify_state_save=0 00:17:15.134 do_verify=1 00:17:15.134 verify=crc32c-intel 00:17:15.134 [job0] 00:17:15.134 filename=/dev/nvme0n1 00:17:15.134 [job1] 00:17:15.134 filename=/dev/nvme0n2 00:17:15.134 [job2] 00:17:15.134 filename=/dev/nvme0n3 00:17:15.134 [job3] 00:17:15.134 filename=/dev/nvme0n4 00:17:15.134 Could not set queue depth (nvme0n1) 00:17:15.134 Could not set queue depth (nvme0n2) 00:17:15.134 Could not set queue depth (nvme0n3) 00:17:15.134 Could not set queue depth (nvme0n4) 00:17:15.403 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:15.403 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:15.403 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:15.403 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:15.403 fio-3.35 00:17:15.403 Starting 4 threads 00:17:16.810 00:17:16.810 job0: (groupid=0, jobs=1): err= 0: pid=1062654: Fri Apr 26 14:53:59 2024 00:17:16.810 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1011msec) 00:17:16.810 slat (nsec): min=7197, max=24950, avg=22820.18, stdev=5447.91 00:17:16.810 clat (usec): min=938, max=42032, avg=39536.03, stdev=9946.59 00:17:16.810 lat (usec): min=948, max=42057, avg=39558.85, stdev=9950.03 00:17:16.810 clat percentiles (usec): 00:17:16.810 | 1.00th=[ 938], 5.00th=[ 938], 10.00th=[41681], 20.00th=[41681], 00:17:16.810 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:16.810 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.810 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:16.810 | 99.99th=[42206] 00:17:16.810 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:17:16.810 slat (nsec): min=9407, max=50827, avg=26628.62, stdev=9715.47 00:17:16.810 clat (usec): min=360, max=960, avg=626.51, stdev=114.45 00:17:16.810 lat (usec): min=378, max=991, avg=653.14, stdev=119.44 00:17:16.810 clat percentiles (usec): 00:17:16.810 | 1.00th=[ 375], 5.00th=[ 392], 10.00th=[ 465], 20.00th=[ 523], 00:17:16.810 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 660], 00:17:16.810 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 791], 00:17:16.810 | 99.00th=[ 857], 99.50th=[ 889], 99.90th=[ 963], 99.95th=[ 963], 00:17:16.810 | 99.99th=[ 963] 00:17:16.810 bw ( KiB/s): min= 4096, max= 4096, per=46.23%, avg=4096.00, stdev= 0.00, samples=1 00:17:16.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:16.810 lat (usec) : 500=14.37%, 750=69.94%, 1000=12.67% 00:17:16.810 lat (msec) : 50=3.02% 00:17:16.810 cpu : usr=0.79%, sys=1.19%, ctx=532, majf=0, minf=1 00:17:16.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.810 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.810 job1: (groupid=0, jobs=1): err= 0: pid=1062671: Fri Apr 26 14:53:59 2024 00:17:16.810 read: IOPS=15, BW=62.9KiB/s (64.4kB/s)(64.0KiB/1018msec) 00:17:16.810 slat (nsec): min=26176, max=26718, avg=26455.94, stdev=156.01 00:17:16.810 clat (usec): min=41085, max=42083, avg=41732.42, stdev=341.15 00:17:16.810 lat (usec): min=41111, max=42109, avg=41758.87, stdev=341.21 00:17:16.810 clat percentiles (usec): 00:17:16.810 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:16.810 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:17:16.810 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.810 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:16.810 | 99.99th=[42206] 00:17:16.810 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 
zone resets 00:17:16.810 slat (nsec): min=8966, max=51015, avg=30412.65, stdev=8810.12 00:17:16.810 clat (usec): min=238, max=1039, avg=643.75, stdev=149.48 00:17:16.810 lat (usec): min=247, max=1072, avg=674.16, stdev=152.56 00:17:16.810 clat percentiles (usec): 00:17:16.810 | 1.00th=[ 302], 5.00th=[ 379], 10.00th=[ 445], 20.00th=[ 515], 00:17:16.810 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 693], 00:17:16.810 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 824], 95.00th=[ 881], 00:17:16.810 | 99.00th=[ 988], 99.50th=[ 996], 99.90th=[ 1037], 99.95th=[ 1037], 00:17:16.810 | 99.99th=[ 1037] 00:17:16.810 bw ( KiB/s): min= 4096, max= 4096, per=46.23%, avg=4096.00, stdev= 0.00, samples=1 00:17:16.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:16.810 lat (usec) : 250=0.57%, 500=15.91%, 750=58.52%, 1000=21.59% 00:17:16.810 lat (msec) : 2=0.38%, 50=3.03% 00:17:16.810 cpu : usr=0.79%, sys=2.26%, ctx=529, majf=0, minf=1 00:17:16.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.810 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.810 job2: (groupid=0, jobs=1): err= 0: pid=1062690: Fri Apr 26 14:53:59 2024 00:17:16.810 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:16.810 slat (nsec): min=7096, max=57770, avg=25005.78, stdev=3202.94 00:17:16.810 clat (usec): min=476, max=1435, avg=1093.45, stdev=128.92 00:17:16.810 lat (usec): min=488, max=1459, avg=1118.46, stdev=129.11 00:17:16.810 clat percentiles (usec): 00:17:16.810 | 1.00th=[ 766], 5.00th=[ 873], 10.00th=[ 938], 20.00th=[ 1004], 00:17:16.810 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:17:16.810 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1287], 00:17:16.810 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1434], 99.95th=[ 1434], 00:17:16.810 | 99.99th=[ 1434] 00:17:16.810 write: IOPS=720, BW=2881KiB/s (2950kB/s)(2884KiB/1001msec); 0 zone resets 00:17:16.810 slat (nsec): min=9025, max=64992, avg=26964.19, stdev=8897.91 00:17:16.810 clat (usec): min=204, max=907, avg=552.63, stdev=114.05 00:17:16.810 lat (usec): min=213, max=937, avg=579.60, stdev=116.98 00:17:16.810 clat percentiles (usec): 00:17:16.810 | 1.00th=[ 277], 5.00th=[ 363], 10.00th=[ 396], 20.00th=[ 465], 00:17:16.810 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 594], 00:17:16.810 | 70.00th=[ 619], 80.00th=[ 644], 90.00th=[ 693], 95.00th=[ 725], 00:17:16.810 | 99.00th=[ 816], 99.50th=[ 865], 99.90th=[ 906], 99.95th=[ 906], 00:17:16.810 | 99.99th=[ 906] 00:17:16.810 bw ( KiB/s): min= 4096, max= 4096, per=46.23%, avg=4096.00, stdev= 0.00, samples=1 00:17:16.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:16.810 lat (usec) : 250=0.32%, 500=17.68%, 750=38.77%, 1000=10.06% 00:17:16.810 lat (msec) : 2=33.17% 00:17:16.810 cpu : usr=1.60%, sys=3.50%, ctx=1233, majf=0, minf=1 00:17:16.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.810 issued rwts: total=512,721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.810 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:17:16.810 job3: (groupid=0, jobs=1): err= 0: pid=1062697: Fri Apr 26 14:53:59 2024 00:17:16.810 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1019msec) 00:17:16.810 slat (nsec): min=24502, max=25042, avg=24745.82, stdev=142.96 00:17:16.810 clat (usec): min=1080, max=42076, avg=39447.92, stdev=9891.18 00:17:16.810 lat (usec): min=1105, max=42101, avg=39472.67, stdev=9891.20 00:17:16.810 clat percentiles (usec): 00:17:16.810 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681], 00:17:16.810 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:16.810 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.810 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:16.810 | 99.99th=[42206] 00:17:16.810 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:17:16.810 slat (nsec): min=9614, max=51472, avg=29892.13, stdev=8527.75 00:17:16.810 clat (usec): min=286, max=959, avg=640.80, stdev=123.87 00:17:16.810 lat (usec): min=318, max=991, avg=670.69, stdev=126.96 00:17:16.810 clat percentiles (usec): 00:17:16.810 | 1.00th=[ 347], 5.00th=[ 424], 10.00th=[ 474], 20.00th=[ 529], 00:17:16.810 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:17:16.810 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 824], 00:17:16.810 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:17:16.810 | 99.99th=[ 963] 00:17:16.810 bw ( KiB/s): min= 4096, max= 4096, per=46.23%, avg=4096.00, stdev= 0.00, samples=1 00:17:16.810 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:16.810 lat (usec) : 500=14.18%, 750=64.65%, 1000=17.96% 00:17:16.810 lat (msec) : 2=0.19%, 50=3.02% 00:17:16.810 cpu : usr=0.88%, sys=1.38%, ctx=530, majf=0, minf=1 00:17:16.810 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.810 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.810 00:17:16.810 Run status group 0 (all jobs): 00:17:16.810 READ: bw=2206KiB/s (2259kB/s), 62.9KiB/s-2046KiB/s (64.4kB/s-2095kB/s), io=2248KiB (2302kB), run=1001-1019msec 00:17:16.810 WRITE: bw=8860KiB/s (9072kB/s), 2010KiB/s-2881KiB/s (2058kB/s-2950kB/s), io=9028KiB (9245kB), run=1001-1019msec 00:17:16.810 00:17:16.810 Disk stats (read/write): 00:17:16.810 nvme0n1: ios=64/512, merge=0/0, ticks=1123/312, in_queue=1435, util=96.99% 00:17:16.810 nvme0n2: ios=54/512, merge=0/0, ticks=679/274, in_queue=953, util=100.00% 00:17:16.810 nvme0n3: ios=478/512, merge=0/0, ticks=506/266, in_queue=772, util=88.41% 00:17:16.810 nvme0n4: ios=34/512, merge=0/0, ticks=1383/296, in_queue=1679, util=97.01% 00:17:16.810 14:53:59 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:16.810 [global] 00:17:16.810 thread=1 00:17:16.810 invalidate=1 00:17:16.810 rw=write 00:17:16.810 time_based=1 00:17:16.810 runtime=1 00:17:16.810 ioengine=libaio 00:17:16.810 direct=1 00:17:16.810 bs=4096 00:17:16.810 iodepth=128 00:17:16.810 norandommap=0 00:17:16.810 numjobs=1 00:17:16.810 00:17:16.810 verify_dump=1 00:17:16.810 verify_backlog=512 00:17:16.810 verify_state_save=0 00:17:16.810 do_verify=1 00:17:16.810 verify=crc32c-intel 
00:17:16.810 [job0] 00:17:16.810 filename=/dev/nvme0n1 00:17:16.810 [job1] 00:17:16.810 filename=/dev/nvme0n2 00:17:16.810 [job2] 00:17:16.810 filename=/dev/nvme0n3 00:17:16.810 [job3] 00:17:16.810 filename=/dev/nvme0n4 00:17:16.810 Could not set queue depth (nvme0n1) 00:17:16.810 Could not set queue depth (nvme0n2) 00:17:16.811 Could not set queue depth (nvme0n3) 00:17:16.811 Could not set queue depth (nvme0n4) 00:17:17.077 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.077 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.077 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.077 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.077 fio-3.35 00:17:17.077 Starting 4 threads 00:17:18.575 00:17:18.575 job0: (groupid=0, jobs=1): err= 0: pid=1063163: Fri Apr 26 14:54:00 2024 00:17:18.575 read: IOPS=3766, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1005msec) 00:17:18.575 slat (nsec): min=907, max=17707k, avg=105697.39, stdev=903351.41 00:17:18.575 clat (usec): min=3057, max=51468, avg=14475.85, stdev=7652.72 00:17:18.575 lat (usec): min=3063, max=51476, avg=14581.55, stdev=7716.71 00:17:18.575 clat percentiles (usec): 00:17:18.575 | 1.00th=[ 4424], 5.00th=[ 6128], 10.00th=[ 7570], 20.00th=[ 7963], 00:17:18.575 | 30.00th=[ 8848], 40.00th=[10683], 50.00th=[13698], 60.00th=[15139], 00:17:18.575 | 70.00th=[17171], 80.00th=[19006], 90.00th=[22938], 95.00th=[28181], 00:17:18.575 | 99.00th=[46400], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:17:18.575 | 99.99th=[51643] 00:17:18.575 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:17:18.575 slat (nsec): min=1643, max=27540k, avg=117457.32, stdev=770886.72 00:17:18.575 clat (usec): min=756, max=73085, avg=16605.84, stdev=10917.30 00:17:18.575 lat (usec): min=766, max=75316, avg=16723.30, stdev=10989.24 00:17:18.575 clat percentiles (usec): 00:17:18.575 | 1.00th=[ 2835], 5.00th=[ 5604], 10.00th=[ 6718], 20.00th=[ 9110], 00:17:18.575 | 30.00th=[11469], 40.00th=[14222], 50.00th=[15139], 60.00th=[15664], 00:17:18.575 | 70.00th=[16450], 80.00th=[18744], 90.00th=[29492], 95.00th=[37487], 00:17:18.575 | 99.00th=[66323], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:17:18.575 | 99.99th=[72877] 00:17:18.575 bw ( KiB/s): min=16208, max=16560, per=17.53%, avg=16384.00, stdev=248.90, samples=2 00:17:18.575 iops : min= 4052, max= 4140, avg=4096.00, stdev=62.23, samples=2 00:17:18.575 lat (usec) : 1000=0.10% 00:17:18.575 lat (msec) : 4=1.37%, 10=29.12%, 20=51.05%, 50=16.96%, 100=1.40% 00:17:18.575 cpu : usr=3.19%, sys=4.18%, ctx=378, majf=0, minf=1 00:17:18.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:18.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:18.575 issued rwts: total=3785,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:18.575 job1: (groupid=0, jobs=1): err= 0: pid=1063177: Fri Apr 26 14:54:00 2024 00:17:18.575 read: IOPS=6365, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1005msec) 00:17:18.575 slat (nsec): min=923, max=14688k, avg=73394.54, stdev=597819.70 00:17:18.575 clat (usec): min=3215, max=38646, avg=9469.93, stdev=4774.70 00:17:18.575 lat (usec): 
min=3243, max=38674, avg=9543.32, stdev=4829.99 00:17:18.575 clat percentiles (usec): 00:17:18.575 | 1.00th=[ 3720], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6390], 00:17:18.575 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7898], 60.00th=[ 8586], 00:17:18.575 | 70.00th=[ 9241], 80.00th=[11469], 90.00th=[14877], 95.00th=[20579], 00:17:18.575 | 99.00th=[26870], 99.50th=[31065], 99.90th=[33817], 99.95th=[34341], 00:17:18.575 | 99.99th=[38536] 00:17:18.575 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:17:18.575 slat (nsec): min=1588, max=15695k, avg=74455.17, stdev=515237.23 00:17:18.575 clat (usec): min=1111, max=48957, avg=10047.23, stdev=8181.95 00:17:18.575 lat (usec): min=1121, max=48963, avg=10121.68, stdev=8231.16 00:17:18.575 clat percentiles (usec): 00:17:18.575 | 1.00th=[ 3490], 5.00th=[ 4015], 10.00th=[ 4178], 20.00th=[ 4555], 00:17:18.575 | 30.00th=[ 6063], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7504], 00:17:18.575 | 70.00th=[ 9372], 80.00th=[11994], 90.00th=[19006], 95.00th=[26608], 00:17:18.575 | 99.00th=[45876], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:17:18.575 | 99.99th=[49021] 00:17:18.575 bw ( KiB/s): min=20480, max=32768, per=28.49%, avg=26624.00, stdev=8688.93, samples=2 00:17:18.575 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:17:18.575 lat (msec) : 2=0.06%, 4=2.84%, 10=71.35%, 20=18.49%, 50=7.26% 00:17:18.575 cpu : usr=4.88%, sys=6.47%, ctx=457, majf=0, minf=1 00:17:18.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:18.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:18.575 issued rwts: total=6397,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:18.575 job2: (groupid=0, jobs=1): err= 0: pid=1063194: Fri Apr 26 14:54:00 2024 00:17:18.575 read: IOPS=6728, BW=26.3MiB/s (27.6MB/s)(26.5MiB/1008msec) 00:17:18.575 slat (nsec): min=947, max=11055k, avg=76694.43, stdev=568153.13 00:17:18.575 clat (usec): min=1435, max=39695, avg=9919.64, stdev=4858.34 00:17:18.575 lat (usec): min=2690, max=39703, avg=9996.33, stdev=4896.78 00:17:18.575 clat percentiles (usec): 00:17:18.575 | 1.00th=[ 4883], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6980], 00:17:18.575 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 9503], 00:17:18.575 | 70.00th=[10290], 80.00th=[11338], 90.00th=[14746], 95.00th=[18220], 00:17:18.575 | 99.00th=[33424], 99.50th=[35914], 99.90th=[39060], 99.95th=[39584], 00:17:18.575 | 99.99th=[39584] 00:17:18.575 write: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec); 0 zone resets 00:17:18.575 slat (nsec): min=1607, max=9775.5k, avg=62531.96, stdev=416906.88 00:17:18.575 clat (usec): min=1125, max=39668, avg=8452.82, stdev=3896.53 00:17:18.575 lat (usec): min=1135, max=39670, avg=8515.35, stdev=3917.61 00:17:18.575 clat percentiles (usec): 00:17:18.575 | 1.00th=[ 2802], 5.00th=[ 3982], 10.00th=[ 4817], 20.00th=[ 5473], 00:17:18.575 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7504], 60.00th=[ 8455], 00:17:18.575 | 70.00th=[ 8979], 80.00th=[11338], 90.00th=[15139], 95.00th=[15795], 00:17:18.575 | 99.00th=[21365], 99.50th=[25297], 99.90th=[31851], 99.95th=[31851], 00:17:18.575 | 99.99th=[39584] 00:17:18.575 bw ( KiB/s): min=22576, max=34752, per=30.67%, avg=28664.00, stdev=8609.73, samples=2 00:17:18.575 iops : min= 5644, max= 8688, avg=7166.00, stdev=2152.43, 
samples=2 00:17:18.575 lat (msec) : 2=0.07%, 4=2.89%, 10=69.64%, 20=24.59%, 50=2.80% 00:17:18.575 cpu : usr=5.96%, sys=6.55%, ctx=510, majf=0, minf=1 00:17:18.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:18.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:18.575 issued rwts: total=6782,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:18.575 job3: (groupid=0, jobs=1): err= 0: pid=1063201: Fri Apr 26 14:54:00 2024 00:17:18.575 read: IOPS=5316, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1008msec) 00:17:18.575 slat (nsec): min=895, max=16246k, avg=92349.71, stdev=731561.00 00:17:18.575 clat (usec): min=2186, max=41174, avg=12188.27, stdev=5659.82 00:17:18.575 lat (usec): min=3283, max=41177, avg=12280.62, stdev=5723.77 00:17:18.575 clat percentiles (usec): 00:17:18.575 | 1.00th=[ 3884], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 7832], 00:17:18.575 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[11600], 00:17:18.575 | 70.00th=[14222], 80.00th=[16712], 90.00th=[19530], 95.00th=[22938], 00:17:18.575 | 99.00th=[28705], 99.50th=[34341], 99.90th=[39584], 99.95th=[41157], 00:17:18.575 | 99.99th=[41157] 00:17:18.575 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:17:18.575 slat (nsec): min=1641, max=16281k, avg=73282.04, stdev=566437.36 00:17:18.575 clat (usec): min=602, max=43732, avg=11103.49, stdev=7569.66 00:17:18.575 lat (usec): min=696, max=43735, avg=11176.78, stdev=7619.74 00:17:18.575 clat percentiles (usec): 00:17:18.575 | 1.00th=[ 1303], 5.00th=[ 3425], 10.00th=[ 4621], 20.00th=[ 5604], 00:17:18.575 | 30.00th=[ 6390], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[10552], 00:17:18.575 | 70.00th=[11863], 80.00th=[15664], 90.00th=[18744], 95.00th=[29754], 00:17:18.575 | 99.00th=[38536], 99.50th=[39060], 99.90th=[43779], 99.95th=[43779], 00:17:18.575 | 99.99th=[43779] 00:17:18.575 bw ( KiB/s): min=16864, max=28192, per=24.10%, avg=22528.00, stdev=8010.11, samples=2 00:17:18.575 iops : min= 4216, max= 7048, avg=5632.00, stdev=2002.53, samples=2 00:17:18.575 lat (usec) : 750=0.09%, 1000=0.14% 00:17:18.575 lat (msec) : 2=0.66%, 4=3.08%, 10=50.30%, 20=36.81%, 50=8.92% 00:17:18.576 cpu : usr=3.48%, sys=7.15%, ctx=396, majf=0, minf=1 00:17:18.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:18.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:18.576 issued rwts: total=5359,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:18.576 00:17:18.576 Run status group 0 (all jobs): 00:17:18.576 READ: bw=86.5MiB/s (90.7MB/s), 14.7MiB/s-26.3MiB/s (15.4MB/s-27.6MB/s), io=87.2MiB (91.4MB), run=1005-1008msec 00:17:18.576 WRITE: bw=91.3MiB/s (95.7MB/s), 15.9MiB/s-27.8MiB/s (16.7MB/s-29.1MB/s), io=92.0MiB (96.5MB), run=1005-1008msec 00:17:18.576 00:17:18.576 Disk stats (read/write): 00:17:18.576 nvme0n1: ios=3115/3343, merge=0/0, ticks=43733/53463, in_queue=97196, util=98.00% 00:17:18.576 nvme0n2: ios=4990/5120, merge=0/0, ticks=46872/54773, in_queue=101645, util=88.16% 00:17:18.576 nvme0n3: ios=6188/6358, merge=0/0, ticks=53932/45914, in_queue=99846, util=91.66% 00:17:18.576 nvme0n4: ios=4113/4484, merge=0/0, ticks=51483/51393, in_queue=102876, 
util=96.90% 00:17:18.576 14:54:00 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:18.576 [global] 00:17:18.576 thread=1 00:17:18.576 invalidate=1 00:17:18.576 rw=randwrite 00:17:18.576 time_based=1 00:17:18.576 runtime=1 00:17:18.576 ioengine=libaio 00:17:18.576 direct=1 00:17:18.576 bs=4096 00:17:18.576 iodepth=128 00:17:18.576 norandommap=0 00:17:18.576 numjobs=1 00:17:18.576 00:17:18.576 verify_dump=1 00:17:18.576 verify_backlog=512 00:17:18.576 verify_state_save=0 00:17:18.576 do_verify=1 00:17:18.576 verify=crc32c-intel 00:17:18.576 [job0] 00:17:18.576 filename=/dev/nvme0n1 00:17:18.576 [job1] 00:17:18.576 filename=/dev/nvme0n2 00:17:18.576 [job2] 00:17:18.576 filename=/dev/nvme0n3 00:17:18.576 [job3] 00:17:18.576 filename=/dev/nvme0n4 00:17:18.576 Could not set queue depth (nvme0n1) 00:17:18.576 Could not set queue depth (nvme0n2) 00:17:18.576 Could not set queue depth (nvme0n3) 00:17:18.576 Could not set queue depth (nvme0n4) 00:17:18.576 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.576 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.576 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.576 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.576 fio-3.35 00:17:18.576 Starting 4 threads 00:17:19.977 00:17:19.977 job0: (groupid=0, jobs=1): err= 0: pid=1063719: Fri Apr 26 14:54:02 2024 00:17:19.977 read: IOPS=8608, BW=33.6MiB/s (35.3MB/s)(33.7MiB/1003msec) 00:17:19.977 slat (nsec): min=868, max=3846.3k, avg=58671.78, stdev=366382.70 00:17:19.977 clat (usec): min=711, max=11637, avg=7406.71, stdev=895.10 00:17:19.977 lat (usec): min=3825, max=11639, avg=7465.38, stdev=935.23 00:17:19.977 clat percentiles (usec): 00:17:19.977 | 1.00th=[ 4948], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7046], 00:17:19.977 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7439], 00:17:19.977 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8455], 95.00th=[ 9110], 00:17:19.977 | 99.00th=[10159], 99.50th=[10421], 99.90th=[10945], 99.95th=[11076], 00:17:19.977 | 99.99th=[11600] 00:17:19.977 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:17:19.977 slat (nsec): min=1444, max=6805.9k, avg=52923.69, stdev=271879.52 00:17:19.977 clat (usec): min=2161, max=14742, avg=7242.63, stdev=1115.78 00:17:19.977 lat (usec): min=2173, max=14750, avg=7295.56, stdev=1128.51 00:17:19.977 clat percentiles (usec): 00:17:19.977 | 1.00th=[ 4228], 5.00th=[ 5735], 10.00th=[ 6521], 20.00th=[ 6849], 00:17:19.977 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7177], 60.00th=[ 7308], 00:17:19.977 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 8029], 95.00th=[ 9110], 00:17:19.977 | 99.00th=[10552], 99.50th=[14091], 99.90th=[14615], 99.95th=[14746], 00:17:19.977 | 99.99th=[14746] 00:17:19.977 bw ( KiB/s): min=34144, max=35488, per=37.25%, avg=34816.00, stdev=950.35, samples=2 00:17:19.977 iops : min= 8536, max= 8872, avg=8704.00, stdev=237.59, samples=2 00:17:19.977 lat (usec) : 750=0.01% 00:17:19.977 lat (msec) : 4=0.69%, 10=97.85%, 20=1.45% 00:17:19.977 cpu : usr=4.09%, sys=6.79%, ctx=999, majf=0, minf=1 00:17:19.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:19.977 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.977 issued rwts: total=8634,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.977 job1: (groupid=0, jobs=1): err= 0: pid=1063731: Fri Apr 26 14:54:02 2024 00:17:19.977 read: IOPS=4601, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1008msec) 00:17:19.977 slat (nsec): min=924, max=9983.1k, avg=94289.23, stdev=626372.15 00:17:19.977 clat (usec): min=4721, max=33228, avg=11497.28, stdev=3937.13 00:17:19.977 lat (usec): min=4727, max=33236, avg=11591.57, stdev=3977.80 00:17:19.977 clat percentiles (usec): 00:17:19.977 | 1.00th=[ 6325], 5.00th=[ 7373], 10.00th=[ 8029], 20.00th=[ 8455], 00:17:19.977 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10683], 60.00th=[10945], 00:17:19.977 | 70.00th=[11863], 80.00th=[13435], 90.00th=[16057], 95.00th=[19006], 00:17:19.977 | 99.00th=[26870], 99.50th=[28181], 99.90th=[33162], 99.95th=[33162], 00:17:19.977 | 99.99th=[33162] 00:17:19.977 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:17:19.977 slat (nsec): min=1550, max=8721.5k, avg=105103.44, stdev=555941.19 00:17:19.977 clat (usec): min=1104, max=38778, avg=14565.45, stdev=8188.45 00:17:19.977 lat (usec): min=1113, max=38786, avg=14670.55, stdev=8239.03 00:17:19.977 clat percentiles (usec): 00:17:19.977 | 1.00th=[ 3752], 5.00th=[ 4424], 10.00th=[ 5473], 20.00th=[ 7504], 00:17:19.977 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11994], 60.00th=[14746], 00:17:19.977 | 70.00th=[15926], 80.00th=[22414], 90.00th=[27132], 95.00th=[31327], 00:17:19.977 | 99.00th=[34866], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:17:19.977 | 99.99th=[38536] 00:17:19.977 bw ( KiB/s): min=19840, max=20336, per=21.49%, avg=20088.00, stdev=350.72, samples=2 00:17:19.977 iops : min= 4960, max= 5084, avg=5022.00, stdev=87.68, samples=2 00:17:19.977 lat (msec) : 2=0.10%, 4=0.99%, 10=34.26%, 20=49.51%, 50=15.14% 00:17:19.977 cpu : usr=3.77%, sys=5.06%, ctx=445, majf=0, minf=1 00:17:19.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:19.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.977 issued rwts: total=4638,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.977 job2: (groupid=0, jobs=1): err= 0: pid=1063740: Fri Apr 26 14:54:02 2024 00:17:19.977 read: IOPS=3743, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1004msec) 00:17:19.977 slat (nsec): min=891, max=17007k, avg=134464.72, stdev=933787.53 00:17:19.977 clat (usec): min=2046, max=50683, avg=16602.63, stdev=7943.12 00:17:19.977 lat (usec): min=4656, max=50693, avg=16737.10, stdev=8022.72 00:17:19.977 clat percentiles (usec): 00:17:19.977 | 1.00th=[ 5211], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10552], 00:17:19.977 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11863], 60.00th=[15139], 00:17:19.977 | 70.00th=[21627], 80.00th=[24249], 90.00th=[28967], 95.00th=[31065], 00:17:19.977 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41681], 99.95th=[44827], 00:17:19.977 | 99.99th=[50594] 00:17:19.977 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:17:19.977 slat (nsec): min=1530, max=13419k, avg=116067.29, stdev=791099.71 00:17:19.977 clat (usec): min=6059, max=40717, avg=15778.37, stdev=6484.99 00:17:19.977 lat 
(usec): min=6067, max=40719, avg=15894.43, stdev=6559.86 00:17:19.977 clat percentiles (usec): 00:17:19.977 | 1.00th=[ 7242], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:17:19.977 | 30.00th=[10159], 40.00th=[13960], 50.00th=[14877], 60.00th=[15270], 00:17:19.977 | 70.00th=[20055], 80.00th=[20841], 90.00th=[23462], 95.00th=[27395], 00:17:19.977 | 99.00th=[36439], 99.50th=[39060], 99.90th=[40633], 99.95th=[40633], 00:17:19.977 | 99.99th=[40633] 00:17:19.977 bw ( KiB/s): min=16384, max=16384, per=17.53%, avg=16384.00, stdev= 0.00, samples=2 00:17:19.977 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:17:19.978 lat (msec) : 4=0.01%, 10=19.11%, 20=49.43%, 50=31.44%, 100=0.01% 00:17:19.978 cpu : usr=1.89%, sys=5.28%, ctx=329, majf=0, minf=1 00:17:19.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:19.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.978 issued rwts: total=3758,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.978 job3: (groupid=0, jobs=1): err= 0: pid=1063748: Fri Apr 26 14:54:02 2024 00:17:19.978 read: IOPS=5483, BW=21.4MiB/s (22.5MB/s)(21.5MiB/1005msec) 00:17:19.978 slat (nsec): min=902, max=5597.6k, avg=95641.58, stdev=595167.66 00:17:19.978 clat (usec): min=1504, max=17385, avg=11609.74, stdev=1632.48 00:17:19.978 lat (usec): min=4347, max=17397, avg=11705.38, stdev=1687.36 00:17:19.978 clat percentiles (usec): 00:17:19.978 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[11207], 00:17:19.978 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:17:19.978 | 70.00th=[11863], 80.00th=[12125], 90.00th=[13698], 95.00th=[14877], 00:17:19.978 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17171], 99.95th=[17171], 00:17:19.978 | 99.99th=[17433] 00:17:19.978 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:17:19.978 slat (nsec): min=1503, max=5099.7k, avg=80250.78, stdev=259839.20 00:17:19.978 clat (usec): min=4253, max=16410, avg=11212.49, stdev=1477.98 00:17:19.978 lat (usec): min=4262, max=16894, avg=11292.74, stdev=1483.98 00:17:19.978 clat percentiles (usec): 00:17:19.978 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[ 9896], 20.00th=[10683], 00:17:19.978 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11338], 00:17:19.978 | 70.00th=[11469], 80.00th=[11600], 90.00th=[12125], 95.00th=[14222], 00:17:19.978 | 99.00th=[15664], 99.50th=[15795], 99.90th=[15926], 99.95th=[16188], 00:17:19.978 | 99.99th=[16450] 00:17:19.978 bw ( KiB/s): min=21256, max=23800, per=24.10%, avg=22528.00, stdev=1798.88, samples=2 00:17:19.978 iops : min= 5314, max= 5950, avg=5632.00, stdev=449.72, samples=2 00:17:19.978 lat (msec) : 2=0.01%, 10=12.15%, 20=87.84% 00:17:19.978 cpu : usr=3.49%, sys=4.38%, ctx=830, majf=0, minf=1 00:17:19.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:19.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.978 issued rwts: total=5511,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.978 00:17:19.978 Run status group 0 (all jobs): 00:17:19.978 READ: bw=87.4MiB/s (91.6MB/s), 14.6MiB/s-33.6MiB/s (15.3MB/s-35.3MB/s), io=88.1MiB (92.3MB), 
run=1003-1008msec 00:17:19.978 WRITE: bw=91.3MiB/s (95.7MB/s), 15.9MiB/s-33.9MiB/s (16.7MB/s-35.5MB/s), io=92.0MiB (96.5MB), run=1003-1008msec 00:17:19.978 00:17:19.978 Disk stats (read/write): 00:17:19.978 nvme0n1: ios=7218/7235, merge=0/0, ticks=26106/24838, in_queue=50944, util=86.57% 00:17:19.978 nvme0n2: ios=4137/4303, merge=0/0, ticks=46230/55664, in_queue=101894, util=87.67% 00:17:19.978 nvme0n3: ios=3099/3175, merge=0/0, ticks=27650/23462, in_queue=51112, util=92.19% 00:17:19.978 nvme0n4: ios=4629/4698, merge=0/0, ticks=26729/24457, in_queue=51186, util=91.57% 00:17:19.978 14:54:02 -- target/fio.sh@55 -- # sync 00:17:19.978 14:54:02 -- target/fio.sh@59 -- # fio_pid=1063936 00:17:19.978 14:54:02 -- target/fio.sh@61 -- # sleep 3 00:17:19.978 14:54:02 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:19.978 [global] 00:17:19.978 thread=1 00:17:19.978 invalidate=1 00:17:19.978 rw=read 00:17:19.978 time_based=1 00:17:19.978 runtime=10 00:17:19.978 ioengine=libaio 00:17:19.978 direct=1 00:17:19.978 bs=4096 00:17:19.978 iodepth=1 00:17:19.978 norandommap=1 00:17:19.978 numjobs=1 00:17:19.978 00:17:19.978 [job0] 00:17:19.978 filename=/dev/nvme0n1 00:17:19.978 [job1] 00:17:19.978 filename=/dev/nvme0n2 00:17:19.978 [job2] 00:17:19.978 filename=/dev/nvme0n3 00:17:19.978 [job3] 00:17:19.978 filename=/dev/nvme0n4 00:17:19.978 Could not set queue depth (nvme0n1) 00:17:19.978 Could not set queue depth (nvme0n2) 00:17:19.978 Could not set queue depth (nvme0n3) 00:17:19.978 Could not set queue depth (nvme0n4) 00:17:20.238 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.238 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.238 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.238 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.238 fio-3.35 00:17:20.238 Starting 4 threads 00:17:22.783 14:54:05 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:23.043 14:54:05 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:23.043 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=253952, buflen=4096 00:17:23.043 fio: pid=1064252, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:23.304 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1732608, buflen=4096 00:17:23.304 fio: pid=1064243, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:23.304 14:54:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.304 14:54:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:23.304 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=11067392, buflen=4096 00:17:23.304 fio: pid=1064203, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:23.304 14:54:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.304 14:54:05 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:23.565 14:54:06 -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.565 14:54:06 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:23.565 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1007616, buflen=4096 00:17:23.565 fio: pid=1064219, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:23.565 00:17:23.565 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1064203: Fri Apr 26 14:54:06 2024 00:17:23.565 read: IOPS=933, BW=3732KiB/s (3822kB/s)(10.6MiB/2896msec) 00:17:23.565 slat (usec): min=5, max=31913, avg=56.22, stdev=837.04 00:17:23.565 clat (usec): min=349, max=40980, avg=999.36, stdev=1341.92 00:17:23.565 lat (usec): min=411, max=41006, avg=1055.60, stdev=1583.34 00:17:23.565 clat percentiles (usec): 00:17:23.565 | 1.00th=[ 627], 5.00th=[ 775], 10.00th=[ 848], 20.00th=[ 906], 00:17:23.565 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:17:23.565 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1057], 00:17:23.565 | 99.00th=[ 1139], 99.50th=[ 1221], 99.90th=[41157], 99.95th=[41157], 00:17:23.565 | 99.99th=[41157] 00:17:23.565 bw ( KiB/s): min= 3144, max= 4040, per=86.32%, avg=3836.80, stdev=387.84, samples=5 00:17:23.565 iops : min= 786, max= 1010, avg=959.20, stdev=96.96, samples=5 00:17:23.565 lat (usec) : 500=0.41%, 750=3.00%, 1000=74.10% 00:17:23.565 lat (msec) : 2=22.20%, 4=0.04%, 10=0.11%, 50=0.11% 00:17:23.565 cpu : usr=2.38%, sys=2.97%, ctx=2707, majf=0, minf=1 00:17:23.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:23.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.565 issued rwts: total=2703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:23.565 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1064219: Fri Apr 26 14:54:06 2024 00:17:23.565 read: IOPS=79, BW=318KiB/s (326kB/s)(984KiB/3090msec) 00:17:23.565 slat (usec): min=6, max=12686, avg=116.98, stdev=1047.15 00:17:23.565 clat (usec): min=482, max=42116, avg=12397.69, stdev=18297.76 00:17:23.565 lat (usec): min=507, max=54062, avg=12515.04, stdev=18478.47 00:17:23.565 clat percentiles (usec): 00:17:23.565 | 1.00th=[ 545], 5.00th=[ 660], 10.00th=[ 701], 20.00th=[ 791], 00:17:23.565 | 30.00th=[ 832], 40.00th=[ 881], 50.00th=[ 914], 60.00th=[ 947], 00:17:23.565 | 70.00th=[ 996], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:17:23.565 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:23.565 | 99.99th=[42206] 00:17:23.565 bw ( KiB/s): min= 96, max= 1352, per=7.31%, avg=325.33, stdev=504.56, samples=6 00:17:23.565 iops : min= 24, max= 338, avg=81.33, stdev=126.14, samples=6 00:17:23.565 lat (usec) : 500=0.40%, 750=16.19%, 1000=53.44% 00:17:23.565 lat (msec) : 2=0.81%, 10=0.40%, 50=28.34% 00:17:23.565 cpu : usr=0.10%, sys=0.29%, ctx=249, majf=0, minf=1 00:17:23.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:23.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.565 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.565 issued rwts: total=247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.565 latency 
: target=0, window=0, percentile=100.00%, depth=1 00:17:23.565 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1064243: Fri Apr 26 14:54:06 2024 00:17:23.565 read: IOPS=155, BW=622KiB/s (637kB/s)(1692KiB/2721msec) 00:17:23.565 slat (usec): min=6, max=14914, avg=62.85, stdev=723.54 00:17:23.565 clat (usec): min=529, max=42082, avg=6312.58, stdev=13558.68 00:17:23.565 lat (usec): min=552, max=42107, avg=6375.51, stdev=13563.98 00:17:23.565 clat percentiles (usec): 00:17:23.565 | 1.00th=[ 799], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1090], 00:17:23.566 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:17:23.566 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[41157], 95.00th=[42206], 00:17:23.566 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:23.566 | 99.99th=[42206] 00:17:23.566 bw ( KiB/s): min= 96, max= 1424, per=14.22%, avg=632.00, stdev=561.48, samples=5 00:17:23.566 iops : min= 24, max= 356, avg=158.00, stdev=140.37, samples=5 00:17:23.566 lat (usec) : 750=0.47%, 1000=3.30% 00:17:23.566 lat (msec) : 2=83.25%, 50=12.74% 00:17:23.566 cpu : usr=0.33%, sys=0.59%, ctx=426, majf=0, minf=1 00:17:23.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:23.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.566 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.566 issued rwts: total=424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:23.566 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1064252: Fri Apr 26 14:54:06 2024 00:17:23.566 read: IOPS=24, BW=96.0KiB/s (98.4kB/s)(248KiB/2582msec) 00:17:23.566 slat (nsec): min=25142, max=58889, avg=26205.33, stdev=4199.14 00:17:23.566 clat (usec): min=971, max=42141, avg=41260.89, stdev=5205.87 00:17:23.566 lat (usec): min=1030, max=42167, avg=41287.11, stdev=5201.65 00:17:23.566 clat percentiles (usec): 00:17:23.566 | 1.00th=[ 971], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:23.566 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:23.566 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:23.566 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:23.566 | 99.99th=[42206] 00:17:23.566 bw ( KiB/s): min= 96, max= 96, per=2.16%, avg=96.00, stdev= 0.00, samples=5 00:17:23.566 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:23.566 lat (usec) : 1000=1.59% 00:17:23.566 lat (msec) : 50=96.83% 00:17:23.566 cpu : usr=0.15%, sys=0.00%, ctx=64, majf=0, minf=2 00:17:23.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:23.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.566 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.566 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:23.566 00:17:23.566 Run status group 0 (all jobs): 00:17:23.566 READ: bw=4444KiB/s (4551kB/s), 96.0KiB/s-3732KiB/s (98.4kB/s-3822kB/s), io=13.4MiB (14.1MB), run=2582-3090msec 00:17:23.566 00:17:23.566 Disk stats (read/write): 00:17:23.566 nvme0n1: ios=2656/0, merge=0/0, ticks=2524/0, in_queue=2524, util=92.09% 00:17:23.566 nvme0n2: ios=246/0, merge=0/0, ticks=3048/0, in_queue=3048, util=94.95% 
00:17:23.566 nvme0n3: ios=408/0, merge=0/0, ticks=2537/0, in_queue=2537, util=95.96% 00:17:23.566 nvme0n4: ios=56/0, merge=0/0, ticks=2308/0, in_queue=2308, util=96.06% 00:17:23.827 14:54:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.827 14:54:06 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:23.827 14:54:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:23.827 14:54:06 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:24.088 14:54:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:24.088 14:54:06 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:24.349 14:54:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:24.349 14:54:06 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:24.349 14:54:06 -- target/fio.sh@69 -- # fio_status=0 00:17:24.349 14:54:06 -- target/fio.sh@70 -- # wait 1063936 00:17:24.349 14:54:06 -- target/fio.sh@70 -- # fio_status=4 00:17:24.349 14:54:06 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.612 14:54:07 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:24.612 14:54:07 -- common/autotest_common.sh@1205 -- # local i=0 00:17:24.612 14:54:07 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:24.612 14:54:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.612 14:54:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:24.612 14:54:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.612 14:54:07 -- common/autotest_common.sh@1217 -- # return 0 00:17:24.612 14:54:07 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:24.612 14:54:07 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:24.612 nvmf hotplug test: fio failed as expected 00:17:24.612 14:54:07 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.612 14:54:07 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:24.612 14:54:07 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:24.612 14:54:07 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:24.612 14:54:07 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:24.612 14:54:07 -- target/fio.sh@91 -- # nvmftestfini 00:17:24.612 14:54:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:24.612 14:54:07 -- nvmf/common.sh@117 -- # sync 00:17:24.612 14:54:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.612 14:54:07 -- nvmf/common.sh@120 -- # set +e 00:17:24.612 14:54:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.612 14:54:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.612 rmmod nvme_tcp 00:17:24.612 rmmod nvme_fabrics 00:17:24.873 rmmod nvme_keyring 00:17:24.873 14:54:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.873 14:54:07 -- nvmf/common.sh@124 -- # set -e 00:17:24.873 14:54:07 -- nvmf/common.sh@125 -- # return 0 
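(The teardown traced around this point disconnects the host from nqn.2016-06.io.spdk:cnode1, unloads the initiator modules, and then kills the target process. A condensed sketch of those host-side steps follows; the NQN, module names and the target pid come from the trace, while the function wrapper is illustrative only and assumes the caller knows the target's pid.)

    teardown_host() {
        local tgt_pid=$1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drop the NVMe/TCP session
        modprobe -v -r nvme-tcp                          # per the trace, this also rmmods nvme_fabrics/nvme_keyring
        modprobe -v -r nvme-fabrics
        kill "$tgt_pid"                                  # the real helper also waits for the app to exit
    }
    # e.g. teardown_host 1060336   # pid of the nvmf target app, as killed just below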
00:17:24.873 14:54:07 -- nvmf/common.sh@478 -- # '[' -n 1060336 ']' 00:17:24.873 14:54:07 -- nvmf/common.sh@479 -- # killprocess 1060336 00:17:24.873 14:54:07 -- common/autotest_common.sh@936 -- # '[' -z 1060336 ']' 00:17:24.873 14:54:07 -- common/autotest_common.sh@940 -- # kill -0 1060336 00:17:24.873 14:54:07 -- common/autotest_common.sh@941 -- # uname 00:17:24.873 14:54:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.873 14:54:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1060336 00:17:24.873 14:54:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:24.873 14:54:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:24.873 14:54:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1060336' 00:17:24.873 killing process with pid 1060336 00:17:24.873 14:54:07 -- common/autotest_common.sh@955 -- # kill 1060336 00:17:24.873 14:54:07 -- common/autotest_common.sh@960 -- # wait 1060336 00:17:24.873 14:54:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:24.873 14:54:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:24.873 14:54:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:24.873 14:54:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.873 14:54:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.873 14:54:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.873 14:54:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.873 14:54:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.426 14:54:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:27.426 00:17:27.426 real 0m27.807s 00:17:27.426 user 2m32.640s 00:17:27.426 sys 0m8.419s 00:17:27.426 14:54:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:27.426 14:54:09 -- common/autotest_common.sh@10 -- # set +x 00:17:27.426 ************************************ 00:17:27.426 END TEST nvmf_fio_target 00:17:27.426 ************************************ 00:17:27.426 14:54:09 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:27.426 14:54:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:27.426 14:54:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:27.426 14:54:09 -- common/autotest_common.sh@10 -- # set +x 00:17:27.426 ************************************ 00:17:27.426 START TEST nvmf_bdevio 00:17:27.426 ************************************ 00:17:27.426 14:54:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:27.426 * Looking for test storage... 
00:17:27.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.426 14:54:09 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.426 14:54:09 -- nvmf/common.sh@7 -- # uname -s 00:17:27.426 14:54:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.426 14:54:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.426 14:54:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.426 14:54:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.426 14:54:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.426 14:54:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.426 14:54:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.426 14:54:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.426 14:54:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.426 14:54:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.426 14:54:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.426 14:54:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.426 14:54:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.426 14:54:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.426 14:54:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.426 14:54:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.426 14:54:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.426 14:54:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.426 14:54:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.426 14:54:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.426 14:54:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.426 14:54:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.426 14:54:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.426 14:54:09 -- paths/export.sh@5 -- # export PATH 00:17:27.426 14:54:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.426 14:54:09 -- nvmf/common.sh@47 -- # : 0 00:17:27.426 14:54:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.426 14:54:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.426 14:54:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.426 14:54:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.426 14:54:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.426 14:54:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.426 14:54:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.426 14:54:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.426 14:54:09 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.426 14:54:09 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.426 14:54:09 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:27.426 14:54:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:27.426 14:54:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.426 14:54:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:27.426 14:54:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:27.426 14:54:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:27.426 14:54:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.426 14:54:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.426 14:54:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.426 14:54:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:27.426 14:54:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:27.426 14:54:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:27.426 14:54:09 -- common/autotest_common.sh@10 -- # set +x 00:17:35.571 14:54:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:35.571 14:54:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.571 14:54:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.571 14:54:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:35.571 14:54:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.571 14:54:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.571 14:54:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.571 14:54:16 -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.571 14:54:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.571 14:54:16 -- nvmf/common.sh@296 
-- # e810=() 00:17:35.571 14:54:16 -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.571 14:54:16 -- nvmf/common.sh@297 -- # x722=() 00:17:35.571 14:54:16 -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.571 14:54:16 -- nvmf/common.sh@298 -- # mlx=() 00:17:35.571 14:54:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.571 14:54:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.571 14:54:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.571 14:54:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.571 14:54:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.571 14:54:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.571 14:54:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:35.571 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:35.571 14:54:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.571 14:54:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:35.571 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:35.571 14:54:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.571 14:54:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:35.571 14:54:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.572 14:54:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.572 14:54:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.572 14:54:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.572 14:54:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.572 14:54:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:35.572 Found 
net devices under 0000:31:00.0: cvl_0_0 00:17:35.572 14:54:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.572 14:54:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.572 14:54:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.572 14:54:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:35.572 14:54:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.572 14:54:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:35.572 Found net devices under 0000:31:00.1: cvl_0_1 00:17:35.572 14:54:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.572 14:54:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:35.572 14:54:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:35.572 14:54:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:35.572 14:54:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:35.572 14:54:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:35.572 14:54:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.572 14:54:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.572 14:54:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.572 14:54:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.572 14:54:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.572 14:54:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.572 14:54:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.572 14:54:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.572 14:54:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.572 14:54:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.572 14:54:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.572 14:54:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.572 14:54:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.572 14:54:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.572 14:54:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.572 14:54:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.572 14:54:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.572 14:54:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.572 14:54:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.572 14:54:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.782 ms 00:17:35.572 00:17:35.572 --- 10.0.0.2 ping statistics --- 00:17:35.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.572 rtt min/avg/max/mdev = 0.782/0.782/0.782/0.000 ms 00:17:35.572 14:54:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:17:35.572 00:17:35.572 --- 10.0.0.1 ping statistics --- 00:17:35.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.572 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:17:35.572 14:54:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.572 14:54:17 -- nvmf/common.sh@411 -- # return 0 00:17:35.572 14:54:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:35.572 14:54:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.572 14:54:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:35.572 14:54:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:35.572 14:54:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.572 14:54:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:35.572 14:54:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:35.572 14:54:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:35.572 14:54:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:35.572 14:54:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.572 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:17:35.572 14:54:17 -- nvmf/common.sh@470 -- # nvmfpid=1069869 00:17:35.572 14:54:17 -- nvmf/common.sh@471 -- # waitforlisten 1069869 00:17:35.572 14:54:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:35.572 14:54:17 -- common/autotest_common.sh@817 -- # '[' -z 1069869 ']' 00:17:35.572 14:54:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.572 14:54:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.572 14:54:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.572 14:54:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.572 14:54:17 -- common/autotest_common.sh@10 -- # set +x 00:17:35.572 [2024-04-26 14:54:17.375102] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:35.572 [2024-04-26 14:54:17.375164] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.572 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.572 [2024-04-26 14:54:17.462921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.572 [2024-04-26 14:54:17.553457] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.572 [2024-04-26 14:54:17.553522] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.572 [2024-04-26 14:54:17.553530] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.572 [2024-04-26 14:54:17.553537] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.572 [2024-04-26 14:54:17.553544] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
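[editorial sketch] The log above records nvmf_tcp_init moving one E810 port (cvl_0_0) into a private network namespace for the SPDK target while the peer port (cvl_0_1) stays in the root namespace as the initiator side, verifying the 10.0.0.0/24 link with ping, and then starting nvmf_tgt inside the namespace. Below is a condensed, illustrative sketch of that bring-up; the interface names, addresses, iptables rule, and nvmf_tgt flags are copied from the log, the binary path is shortened, and error handling beyond `set -e` is omitted — this is not the exact common.sh code path.

```bash
#!/usr/bin/env bash
# Minimal sketch of the namespace bring-up performed by nvmf_tcp_init in this log.
# Interface names, addresses, and nvmf_tgt flags mirror the log; the SPDK binary
# path is a placeholder. Requires root and the two E810 net devices to exist.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, moved into the namespace
INI_IF=cvl_0_1   # initiator-side port, left in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic on the default port, then verify both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the SPDK target inside the namespace with the core mask used by this job.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
```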
00:17:35.572 [2024-04-26 14:54:17.553720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:35.572 [2024-04-26 14:54:17.554257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:35.572 [2024-04-26 14:54:17.554468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:35.572 [2024-04-26 14:54:17.554470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.572 14:54:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:35.572 14:54:18 -- common/autotest_common.sh@850 -- # return 0 00:17:35.572 14:54:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:35.572 14:54:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:35.572 14:54:18 -- common/autotest_common.sh@10 -- # set +x 00:17:35.572 14:54:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.572 14:54:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:35.572 14:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.572 14:54:18 -- common/autotest_common.sh@10 -- # set +x 00:17:35.572 [2024-04-26 14:54:18.213052] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.572 14:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.572 14:54:18 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:35.572 14:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.572 14:54:18 -- common/autotest_common.sh@10 -- # set +x 00:17:35.834 Malloc0 00:17:35.834 14:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.834 14:54:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:35.834 14:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.834 14:54:18 -- common/autotest_common.sh@10 -- # set +x 00:17:35.834 14:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.834 14:54:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.834 14:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.834 14:54:18 -- common/autotest_common.sh@10 -- # set +x 00:17:35.834 14:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.834 14:54:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.834 14:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.834 14:54:18 -- common/autotest_common.sh@10 -- # set +x 00:17:35.834 [2024-04-26 14:54:18.278273] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.834 14:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.834 14:54:18 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:35.834 14:54:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:35.834 14:54:18 -- nvmf/common.sh@521 -- # config=() 00:17:35.834 14:54:18 -- nvmf/common.sh@521 -- # local subsystem config 00:17:35.834 14:54:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:35.834 14:54:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:35.834 { 00:17:35.834 "params": { 00:17:35.834 "name": "Nvme$subsystem", 00:17:35.834 "trtype": "$TEST_TRANSPORT", 00:17:35.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.834 "adrfam": "ipv4", 00:17:35.834 "trsvcid": 
"$NVMF_PORT", 00:17:35.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.834 "hdgst": ${hdgst:-false}, 00:17:35.834 "ddgst": ${ddgst:-false} 00:17:35.834 }, 00:17:35.834 "method": "bdev_nvme_attach_controller" 00:17:35.834 } 00:17:35.834 EOF 00:17:35.834 )") 00:17:35.834 14:54:18 -- nvmf/common.sh@543 -- # cat 00:17:35.834 14:54:18 -- nvmf/common.sh@545 -- # jq . 00:17:35.834 14:54:18 -- nvmf/common.sh@546 -- # IFS=, 00:17:35.834 14:54:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:35.834 "params": { 00:17:35.834 "name": "Nvme1", 00:17:35.834 "trtype": "tcp", 00:17:35.834 "traddr": "10.0.0.2", 00:17:35.834 "adrfam": "ipv4", 00:17:35.834 "trsvcid": "4420", 00:17:35.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.834 "hdgst": false, 00:17:35.834 "ddgst": false 00:17:35.834 }, 00:17:35.834 "method": "bdev_nvme_attach_controller" 00:17:35.834 }' 00:17:35.834 [2024-04-26 14:54:18.340171] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:35.834 [2024-04-26 14:54:18.340278] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070043 ] 00:17:35.834 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.834 [2024-04-26 14:54:18.409470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.834 [2024-04-26 14:54:18.482254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.834 [2024-04-26 14:54:18.482372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.834 [2024-04-26 14:54:18.482375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.095 I/O targets: 00:17:36.095 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:36.095 00:17:36.095 00:17:36.096 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.096 http://cunit.sourceforge.net/ 00:17:36.096 00:17:36.096 00:17:36.096 Suite: bdevio tests on: Nvme1n1 00:17:36.096 Test: blockdev write read block ...passed 00:17:36.096 Test: blockdev write zeroes read block ...passed 00:17:36.096 Test: blockdev write zeroes read no split ...passed 00:17:36.356 Test: blockdev write zeroes read split ...passed 00:17:36.356 Test: blockdev write zeroes read split partial ...passed 00:17:36.356 Test: blockdev reset ...[2024-04-26 14:54:18.840247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:36.356 [2024-04-26 14:54:18.840311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20df8f0 (9): Bad file descriptor 00:17:36.356 [2024-04-26 14:54:18.988462] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:36.356 passed 00:17:36.617 Test: blockdev write read 8 blocks ...passed 00:17:36.617 Test: blockdev write read size > 128k ...passed 00:17:36.617 Test: blockdev write read invalid size ...passed 00:17:36.617 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:36.617 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:36.617 Test: blockdev write read max offset ...passed 00:17:36.617 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:36.617 Test: blockdev writev readv 8 blocks ...passed 00:17:36.617 Test: blockdev writev readv 30 x 1block ...passed 00:17:36.617 Test: blockdev writev readv block ...passed 00:17:36.617 Test: blockdev writev readv size > 128k ...passed 00:17:36.617 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:36.617 Test: blockdev comparev and writev ...[2024-04-26 14:54:19.254745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.617 [2024-04-26 14:54:19.254771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.617 [2024-04-26 14:54:19.254783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.617 [2024-04-26 14:54:19.254789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:36.617 [2024-04-26 14:54:19.255290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.617 [2024-04-26 14:54:19.255299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:36.617 [2024-04-26 14:54:19.255310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.617 [2024-04-26 14:54:19.255316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:36.617 [2024-04-26 14:54:19.255813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.617 [2024-04-26 14:54:19.255821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:36.617 [2024-04-26 14:54:19.255831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.617 [2024-04-26 14:54:19.255840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:36.617 [2024-04-26 14:54:19.256372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.617 [2024-04-26 14:54:19.256380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:36.617 [2024-04-26 14:54:19.256389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.617 [2024-04-26 14:54:19.256395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:36.877 passed 00:17:36.877 Test: blockdev nvme passthru rw ...passed 00:17:36.877 Test: blockdev nvme passthru vendor specific ...[2024-04-26 14:54:19.341723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.877 [2024-04-26 14:54:19.341735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:36.877 [2024-04-26 14:54:19.342124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.877 [2024-04-26 14:54:19.342132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:36.877 [2024-04-26 14:54:19.342501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.877 [2024-04-26 14:54:19.342515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:36.877 [2024-04-26 14:54:19.342864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.877 [2024-04-26 14:54:19.342871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:36.877 passed 00:17:36.877 Test: blockdev nvme admin passthru ...passed 00:17:36.877 Test: blockdev copy ...passed 00:17:36.877 00:17:36.877 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.877 suites 1 1 n/a 0 0 00:17:36.877 tests 23 23 23 0 0 00:17:36.877 asserts 152 152 152 0 n/a 00:17:36.877 00:17:36.877 Elapsed time = 1.552 seconds 00:17:36.878 14:54:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.878 14:54:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.878 14:54:19 -- common/autotest_common.sh@10 -- # set +x 00:17:36.878 14:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.878 14:54:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:36.878 14:54:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:36.878 14:54:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:36.878 14:54:19 -- nvmf/common.sh@117 -- # sync 00:17:36.878 14:54:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:36.878 14:54:19 -- nvmf/common.sh@120 -- # set +e 00:17:36.878 14:54:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:36.878 14:54:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.138 rmmod nvme_tcp 00:17:37.138 rmmod nvme_fabrics 00:17:37.138 rmmod nvme_keyring 00:17:37.138 14:54:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.138 14:54:19 -- nvmf/common.sh@124 -- # set -e 00:17:37.138 14:54:19 -- nvmf/common.sh@125 -- # return 0 00:17:37.138 14:54:19 -- nvmf/common.sh@478 -- # '[' -n 1069869 ']' 00:17:37.138 14:54:19 -- nvmf/common.sh@479 -- # killprocess 1069869 00:17:37.138 14:54:19 -- common/autotest_common.sh@936 -- # '[' -z 1069869 ']' 00:17:37.138 14:54:19 -- common/autotest_common.sh@940 -- # kill -0 1069869 00:17:37.138 14:54:19 -- common/autotest_common.sh@941 -- # uname 00:17:37.138 14:54:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.138 14:54:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1069869 00:17:37.138 14:54:19 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:37.138 14:54:19 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:37.138 14:54:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1069869' 00:17:37.138 killing process with pid 1069869 00:17:37.138 14:54:19 -- common/autotest_common.sh@955 -- # kill 1069869 00:17:37.139 14:54:19 -- common/autotest_common.sh@960 -- # wait 1069869 00:17:37.139 14:54:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:37.139 14:54:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:37.139 14:54:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:37.139 14:54:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:37.139 14:54:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:37.139 14:54:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.139 14:54:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.139 14:54:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.685 14:54:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:39.685 00:17:39.685 real 0m12.091s 00:17:39.685 user 0m13.460s 00:17:39.685 sys 0m5.960s 00:17:39.685 14:54:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:39.685 14:54:21 -- common/autotest_common.sh@10 -- # set +x 00:17:39.685 ************************************ 00:17:39.685 END TEST nvmf_bdevio 00:17:39.685 ************************************ 00:17:39.685 14:54:21 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:39.685 14:54:21 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:39.685 14:54:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:39.685 14:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:39.685 14:54:21 -- common/autotest_common.sh@10 -- # set +x 00:17:39.685 ************************************ 00:17:39.685 START TEST nvmf_bdevio_no_huge 00:17:39.685 ************************************ 00:17:39.685 14:54:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:39.685 * Looking for test storage... 
00:17:39.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.685 14:54:22 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.685 14:54:22 -- nvmf/common.sh@7 -- # uname -s 00:17:39.685 14:54:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.685 14:54:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.685 14:54:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.685 14:54:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.685 14:54:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.685 14:54:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.685 14:54:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.685 14:54:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.685 14:54:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.685 14:54:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.685 14:54:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:39.685 14:54:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:39.685 14:54:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.685 14:54:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.685 14:54:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.685 14:54:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.685 14:54:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.685 14:54:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.685 14:54:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.685 14:54:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.685 14:54:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.685 14:54:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.685 14:54:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.685 14:54:22 -- paths/export.sh@5 -- # export PATH 00:17:39.686 14:54:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.686 14:54:22 -- nvmf/common.sh@47 -- # : 0 00:17:39.686 14:54:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.686 14:54:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.686 14:54:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.686 14:54:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.686 14:54:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.686 14:54:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.686 14:54:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.686 14:54:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.686 14:54:22 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:39.686 14:54:22 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:39.686 14:54:22 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:39.686 14:54:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:39.686 14:54:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.686 14:54:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:39.686 14:54:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:39.686 14:54:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:39.686 14:54:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.686 14:54:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.686 14:54:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.686 14:54:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:39.686 14:54:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:39.686 14:54:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:39.686 14:54:22 -- common/autotest_common.sh@10 -- # set +x 00:17:46.275 14:54:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:46.275 14:54:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.275 14:54:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.275 14:54:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.275 14:54:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.275 14:54:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.275 14:54:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.275 14:54:28 -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.275 14:54:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.275 14:54:28 -- nvmf/common.sh@296 
-- # e810=() 00:17:46.275 14:54:28 -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.275 14:54:28 -- nvmf/common.sh@297 -- # x722=() 00:17:46.275 14:54:28 -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.275 14:54:28 -- nvmf/common.sh@298 -- # mlx=() 00:17:46.275 14:54:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.275 14:54:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.275 14:54:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.275 14:54:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.275 14:54:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.275 14:54:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.275 14:54:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:46.275 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:46.275 14:54:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.275 14:54:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:46.275 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:46.275 14:54:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.275 14:54:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.275 14:54:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.276 14:54:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.276 14:54:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.276 14:54:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:46.276 14:54:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.276 14:54:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:46.276 Found 
net devices under 0000:31:00.0: cvl_0_0 00:17:46.276 14:54:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.276 14:54:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.276 14:54:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.276 14:54:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:46.276 14:54:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.276 14:54:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:46.276 Found net devices under 0000:31:00.1: cvl_0_1 00:17:46.276 14:54:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.276 14:54:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:46.276 14:54:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:46.276 14:54:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:46.276 14:54:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:46.276 14:54:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:46.276 14:54:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.276 14:54:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.276 14:54:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.276 14:54:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.276 14:54:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.276 14:54:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.276 14:54:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.276 14:54:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.276 14:54:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.276 14:54:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.276 14:54:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.276 14:54:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.276 14:54:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.537 14:54:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.537 14:54:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.537 14:54:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.537 14:54:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.537 14:54:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.537 14:54:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.537 14:54:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:17:46.537 00:17:46.537 --- 10.0.0.2 ping statistics --- 00:17:46.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.537 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:17:46.537 14:54:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:17:46.537 00:17:46.537 --- 10.0.0.1 ping statistics --- 00:17:46.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.537 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:17:46.537 14:54:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.537 14:54:29 -- nvmf/common.sh@411 -- # return 0 00:17:46.537 14:54:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:46.537 14:54:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.537 14:54:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:46.537 14:54:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:46.537 14:54:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.537 14:54:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:46.537 14:54:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:46.537 14:54:29 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:46.537 14:54:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:46.537 14:54:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:46.537 14:54:29 -- common/autotest_common.sh@10 -- # set +x 00:17:46.799 14:54:29 -- nvmf/common.sh@470 -- # nvmfpid=1074484 00:17:46.799 14:54:29 -- nvmf/common.sh@471 -- # waitforlisten 1074484 00:17:46.799 14:54:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:46.799 14:54:29 -- common/autotest_common.sh@817 -- # '[' -z 1074484 ']' 00:17:46.799 14:54:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.799 14:54:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:46.799 14:54:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.799 14:54:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:46.799 14:54:29 -- common/autotest_common.sh@10 -- # set +x 00:17:46.799 [2024-04-26 14:54:29.260677] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:46.799 [2024-04-26 14:54:29.260745] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:46.799 [2024-04-26 14:54:29.357491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.799 [2024-04-26 14:54:29.459556] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.799 [2024-04-26 14:54:29.459608] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.799 [2024-04-26 14:54:29.459616] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.799 [2024-04-26 14:54:29.459623] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.799 [2024-04-26 14:54:29.459629] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
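[editorial sketch] This second bdevio pass repeats the same flow without hugepages: nvmfappstart launches nvmf_tgt with `--no-huge -s 1024` inside the namespace, and the bdevio binary later receives the same flags. A side-by-side sketch of the two launch lines follows, with flags copied from the log and paths shortened; it is illustrative only, not the literal common.sh invocation.

```bash
# Sketch only: target launch lines for the two bdevio passes, flags as recorded
# in this log. Paths are shortened; both run inside the cvl_0_0_ns_spdk namespace.

# nvmf_bdevio: hugepage-backed target
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78

# nvmf_bdevio_no_huge: --no-huge makes DPDK use ordinary (non-hugepage) memory,
# and -s 1024 limits the memory pool to 1024 MB for the run.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
```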
00:17:46.799 [2024-04-26 14:54:29.459827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:46.799 [2024-04-26 14:54:29.459972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:46.799 [2024-04-26 14:54:29.460268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:46.799 [2024-04-26 14:54:29.460270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.744 14:54:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:47.744 14:54:30 -- common/autotest_common.sh@850 -- # return 0 00:17:47.744 14:54:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:47.744 14:54:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:47.744 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 14:54:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.744 14:54:30 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.744 14:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.744 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 [2024-04-26 14:54:30.106219] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.744 14:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.744 14:54:30 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:47.744 14:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.744 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 Malloc0 00:17:47.744 14:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.744 14:54:30 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:47.744 14:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.744 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 14:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.744 14:54:30 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.744 14:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.744 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 14:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.744 14:54:30 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.744 14:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.744 14:54:30 -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 [2024-04-26 14:54:30.144007] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.744 14:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.744 14:54:30 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:47.744 14:54:30 -- nvmf/common.sh@521 -- # config=() 00:17:47.744 14:54:30 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:47.744 14:54:30 -- nvmf/common.sh@521 -- # local subsystem config 00:17:47.744 14:54:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:47.744 14:54:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:47.744 { 00:17:47.744 "params": { 00:17:47.744 "name": "Nvme$subsystem", 00:17:47.744 "trtype": "$TEST_TRANSPORT", 00:17:47.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.744 "adrfam": "ipv4", 00:17:47.744 
"trsvcid": "$NVMF_PORT", 00:17:47.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.744 "hdgst": ${hdgst:-false}, 00:17:47.744 "ddgst": ${ddgst:-false} 00:17:47.744 }, 00:17:47.744 "method": "bdev_nvme_attach_controller" 00:17:47.744 } 00:17:47.744 EOF 00:17:47.744 )") 00:17:47.744 14:54:30 -- nvmf/common.sh@543 -- # cat 00:17:47.744 14:54:30 -- nvmf/common.sh@545 -- # jq . 00:17:47.744 14:54:30 -- nvmf/common.sh@546 -- # IFS=, 00:17:47.744 14:54:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:47.744 "params": { 00:17:47.744 "name": "Nvme1", 00:17:47.744 "trtype": "tcp", 00:17:47.744 "traddr": "10.0.0.2", 00:17:47.744 "adrfam": "ipv4", 00:17:47.744 "trsvcid": "4420", 00:17:47.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.745 "hdgst": false, 00:17:47.745 "ddgst": false 00:17:47.745 }, 00:17:47.745 "method": "bdev_nvme_attach_controller" 00:17:47.745 }' 00:17:47.745 [2024-04-26 14:54:30.206458] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:47.745 [2024-04-26 14:54:30.206571] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1074794 ] 00:17:47.745 [2024-04-26 14:54:30.278108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:47.745 [2024-04-26 14:54:30.372335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.745 [2024-04-26 14:54:30.372450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.745 [2024-04-26 14:54:30.372454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.006 I/O targets: 00:17:48.006 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:48.006 00:17:48.006 00:17:48.006 CUnit - A unit testing framework for C - Version 2.1-3 00:17:48.006 http://cunit.sourceforge.net/ 00:17:48.006 00:17:48.006 00:17:48.006 Suite: bdevio tests on: Nvme1n1 00:17:48.006 Test: blockdev write read block ...passed 00:17:48.006 Test: blockdev write zeroes read block ...passed 00:17:48.006 Test: blockdev write zeroes read no split ...passed 00:17:48.267 Test: blockdev write zeroes read split ...passed 00:17:48.267 Test: blockdev write zeroes read split partial ...passed 00:17:48.267 Test: blockdev reset ...[2024-04-26 14:54:30.743191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:48.267 [2024-04-26 14:54:30.743259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75bfa0 (9): Bad file descriptor 00:17:48.267 [2024-04-26 14:54:30.879012] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:48.267 passed 00:17:48.267 Test: blockdev write read 8 blocks ...passed 00:17:48.267 Test: blockdev write read size > 128k ...passed 00:17:48.267 Test: blockdev write read invalid size ...passed 00:17:48.528 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:48.528 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:48.528 Test: blockdev write read max offset ...passed 00:17:48.528 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:48.528 Test: blockdev writev readv 8 blocks ...passed 00:17:48.528 Test: blockdev writev readv 30 x 1block ...passed 00:17:48.528 Test: blockdev writev readv block ...passed 00:17:48.528 Test: blockdev writev readv size > 128k ...passed 00:17:48.528 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:48.528 Test: blockdev comparev and writev ...[2024-04-26 14:54:31.100750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.528 [2024-04-26 14:54:31.100774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.100786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.528 [2024-04-26 14:54:31.100792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.101172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.528 [2024-04-26 14:54:31.101180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.101190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.528 [2024-04-26 14:54:31.101196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.101584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.528 [2024-04-26 14:54:31.101592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.101601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.528 [2024-04-26 14:54:31.101607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.101930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.528 [2024-04-26 14:54:31.101938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.101948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.528 [2024-04-26 14:54:31.101953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:48.528 passed 00:17:48.528 Test: blockdev nvme passthru rw ...passed 00:17:48.528 Test: blockdev nvme passthru vendor specific ...[2024-04-26 14:54:31.186428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.528 [2024-04-26 14:54:31.186439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.186668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.528 [2024-04-26 14:54:31.186676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.186942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.528 [2024-04-26 14:54:31.186949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:48.528 [2024-04-26 14:54:31.187206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.528 [2024-04-26 14:54:31.187213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:48.528 passed 00:17:48.790 Test: blockdev nvme admin passthru ...passed 00:17:48.790 Test: blockdev copy ...passed 00:17:48.790 00:17:48.790 Run Summary: Type Total Ran Passed Failed Inactive 00:17:48.790 suites 1 1 n/a 0 0 00:17:48.790 tests 23 23 23 0 0 00:17:48.790 asserts 152 152 152 0 n/a 00:17:48.790 00:17:48.790 Elapsed time = 1.464 seconds 00:17:49.051 14:54:31 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.051 14:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.051 14:54:31 -- common/autotest_common.sh@10 -- # set +x 00:17:49.051 14:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.051 14:54:31 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:49.051 14:54:31 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:49.051 14:54:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:49.051 14:54:31 -- nvmf/common.sh@117 -- # sync 00:17:49.051 14:54:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:49.051 14:54:31 -- nvmf/common.sh@120 -- # set +e 00:17:49.051 14:54:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.051 14:54:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:49.051 rmmod nvme_tcp 00:17:49.051 rmmod nvme_fabrics 00:17:49.051 rmmod nvme_keyring 00:17:49.051 14:54:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.051 14:54:31 -- nvmf/common.sh@124 -- # set -e 00:17:49.051 14:54:31 -- nvmf/common.sh@125 -- # return 0 00:17:49.051 14:54:31 -- nvmf/common.sh@478 -- # '[' -n 1074484 ']' 00:17:49.051 14:54:31 -- nvmf/common.sh@479 -- # killprocess 1074484 00:17:49.051 14:54:31 -- common/autotest_common.sh@936 -- # '[' -z 1074484 ']' 00:17:49.051 14:54:31 -- common/autotest_common.sh@940 -- # kill -0 1074484 00:17:49.051 14:54:31 -- common/autotest_common.sh@941 -- # uname 00:17:49.051 14:54:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.051 14:54:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1074484 00:17:49.051 14:54:31 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:49.051 14:54:31 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:49.051 14:54:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1074484' 00:17:49.051 killing process with pid 1074484 00:17:49.051 14:54:31 -- common/autotest_common.sh@955 -- # kill 1074484 00:17:49.051 14:54:31 -- common/autotest_common.sh@960 -- # wait 1074484 00:17:49.313 14:54:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:49.313 14:54:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:49.313 14:54:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:49.313 14:54:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.313 14:54:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:49.313 14:54:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.313 14:54:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.313 14:54:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.873 14:54:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:51.873 00:17:51.873 real 0m11.957s 00:17:51.873 user 0m14.004s 00:17:51.873 sys 0m6.113s 00:17:51.873 14:54:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:51.873 14:54:33 -- common/autotest_common.sh@10 -- # set +x 00:17:51.873 ************************************ 00:17:51.873 END TEST nvmf_bdevio_no_huge 00:17:51.873 ************************************ 00:17:51.873 14:54:34 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:51.873 14:54:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:51.873 14:54:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.873 14:54:34 -- common/autotest_common.sh@10 -- # set +x 00:17:51.873 ************************************ 00:17:51.873 START TEST nvmf_tls 00:17:51.873 ************************************ 00:17:51.873 14:54:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:51.873 * Looking for test storage... 
00:17:51.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.873 14:54:34 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.873 14:54:34 -- nvmf/common.sh@7 -- # uname -s 00:17:51.873 14:54:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.873 14:54:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.873 14:54:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.873 14:54:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.873 14:54:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.873 14:54:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.873 14:54:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.873 14:54:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.873 14:54:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.873 14:54:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.873 14:54:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.873 14:54:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.873 14:54:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.873 14:54:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.873 14:54:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.873 14:54:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.873 14:54:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.873 14:54:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.873 14:54:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.873 14:54:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.873 14:54:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.873 14:54:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.873 14:54:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.873 14:54:34 -- paths/export.sh@5 -- # export PATH 00:17:51.873 14:54:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.873 14:54:34 -- nvmf/common.sh@47 -- # : 0 00:17:51.873 14:54:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.873 14:54:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.873 14:54:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.873 14:54:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.873 14:54:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.873 14:54:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.873 14:54:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.873 14:54:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.873 14:54:34 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.873 14:54:34 -- target/tls.sh@62 -- # nvmftestinit 00:17:51.873 14:54:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:51.873 14:54:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.873 14:54:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:51.873 14:54:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:51.873 14:54:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:51.873 14:54:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.873 14:54:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.873 14:54:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.873 14:54:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:51.873 14:54:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:51.873 14:54:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.874 14:54:34 -- common/autotest_common.sh@10 -- # set +x 00:18:00.086 14:54:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:00.086 14:54:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.086 14:54:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.086 14:54:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.086 14:54:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.086 14:54:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.086 14:54:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.086 14:54:41 -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.086 14:54:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.086 14:54:41 -- nvmf/common.sh@296 -- # e810=() 00:18:00.086 
14:54:41 -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.086 14:54:41 -- nvmf/common.sh@297 -- # x722=() 00:18:00.086 14:54:41 -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.086 14:54:41 -- nvmf/common.sh@298 -- # mlx=() 00:18:00.086 14:54:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.086 14:54:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.086 14:54:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.086 14:54:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.086 14:54:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.086 14:54:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.086 14:54:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:00.086 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:00.086 14:54:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.086 14:54:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:00.086 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:00.086 14:54:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.086 14:54:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.086 14:54:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.086 14:54:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:00.086 14:54:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.086 14:54:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:00.086 Found net devices under 
0000:31:00.0: cvl_0_0 00:18:00.086 14:54:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.086 14:54:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.086 14:54:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.086 14:54:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:00.086 14:54:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.086 14:54:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:00.086 Found net devices under 0000:31:00.1: cvl_0_1 00:18:00.086 14:54:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.086 14:54:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:00.086 14:54:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:00.086 14:54:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:00.086 14:54:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.086 14:54:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.086 14:54:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.086 14:54:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.086 14:54:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.086 14:54:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.086 14:54:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.086 14:54:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.086 14:54:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.086 14:54:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.086 14:54:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.086 14:54:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.086 14:54:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.086 14:54:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.086 14:54:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.086 14:54:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.086 14:54:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.086 14:54:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.086 14:54:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.086 14:54:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:18:00.086 00:18:00.086 --- 10.0.0.2 ping statistics --- 00:18:00.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.086 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:18:00.086 14:54:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:18:00.086 00:18:00.086 --- 10.0.0.1 ping statistics --- 00:18:00.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.086 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:18:00.086 14:54:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.086 14:54:41 -- nvmf/common.sh@411 -- # return 0 00:18:00.086 14:54:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:00.086 14:54:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.086 14:54:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:00.086 14:54:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.086 14:54:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:00.086 14:54:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:00.086 14:54:41 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:00.086 14:54:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:00.086 14:54:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:00.086 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:00.086 14:54:41 -- nvmf/common.sh@470 -- # nvmfpid=1079265 00:18:00.086 14:54:41 -- nvmf/common.sh@471 -- # waitforlisten 1079265 00:18:00.086 14:54:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:00.086 14:54:41 -- common/autotest_common.sh@817 -- # '[' -z 1079265 ']' 00:18:00.087 14:54:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.087 14:54:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.087 14:54:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.087 14:54:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.087 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:18:00.087 [2024-04-26 14:54:41.678271] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:00.087 [2024-04-26 14:54:41.678342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.087 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.087 [2024-04-26 14:54:41.767867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.087 [2024-04-26 14:54:41.860698] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.087 [2024-04-26 14:54:41.860755] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.087 [2024-04-26 14:54:41.860764] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.087 [2024-04-26 14:54:41.860777] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.087 [2024-04-26 14:54:41.860783] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:00.087 [2024-04-26 14:54:41.860807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.087 14:54:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:00.087 14:54:42 -- common/autotest_common.sh@850 -- # return 0 00:18:00.087 14:54:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:00.087 14:54:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:00.087 14:54:42 -- common/autotest_common.sh@10 -- # set +x 00:18:00.087 14:54:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.087 14:54:42 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:00.087 14:54:42 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:00.087 true 00:18:00.087 14:54:42 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.087 14:54:42 -- target/tls.sh@73 -- # jq -r .tls_version 00:18:00.347 14:54:42 -- target/tls.sh@73 -- # version=0 00:18:00.347 14:54:42 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:00.347 14:54:42 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:00.608 14:54:43 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.608 14:54:43 -- target/tls.sh@81 -- # jq -r .tls_version 00:18:00.608 14:54:43 -- target/tls.sh@81 -- # version=13 00:18:00.608 14:54:43 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:00.608 14:54:43 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:00.868 14:54:43 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.868 14:54:43 -- target/tls.sh@89 -- # jq -r .tls_version 00:18:00.868 14:54:43 -- target/tls.sh@89 -- # version=7 00:18:00.868 14:54:43 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:00.868 14:54:43 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.868 14:54:43 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:01.129 14:54:43 -- target/tls.sh@96 -- # ktls=false 00:18:01.129 14:54:43 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:01.129 14:54:43 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:01.390 14:54:43 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.390 14:54:43 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:01.390 14:54:43 -- target/tls.sh@104 -- # ktls=true 00:18:01.390 14:54:43 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:01.390 14:54:43 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:01.650 14:54:44 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.650 14:54:44 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:01.912 14:54:44 -- target/tls.sh@112 -- # ktls=false 00:18:01.912 14:54:44 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:01.912 14:54:44 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:18:01.912 14:54:44 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:01.912 14:54:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:01.912 14:54:44 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:01.912 14:54:44 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:18:01.912 14:54:44 -- nvmf/common.sh@693 -- # digest=1 00:18:01.912 14:54:44 -- nvmf/common.sh@694 -- # python - 00:18:01.912 14:54:44 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:01.912 14:54:44 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:01.912 14:54:44 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:01.912 14:54:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:01.912 14:54:44 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:01.912 14:54:44 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:18:01.912 14:54:44 -- nvmf/common.sh@693 -- # digest=1 00:18:01.912 14:54:44 -- nvmf/common.sh@694 -- # python - 00:18:01.912 14:54:44 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:01.912 14:54:44 -- target/tls.sh@121 -- # mktemp 00:18:01.912 14:54:44 -- target/tls.sh@121 -- # key_path=/tmp/tmp.218joURRRe 00:18:01.912 14:54:44 -- target/tls.sh@122 -- # mktemp 00:18:01.912 14:54:44 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Jjjmc5kuAq 00:18:01.912 14:54:44 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:01.912 14:54:44 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:01.912 14:54:44 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.218joURRRe 00:18:01.912 14:54:44 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Jjjmc5kuAq 00:18:01.912 14:54:44 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:02.172 14:54:44 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:02.433 14:54:44 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.218joURRRe 00:18:02.433 14:54:44 -- target/tls.sh@49 -- # local key=/tmp/tmp.218joURRRe 00:18:02.433 14:54:44 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:02.433 [2024-04-26 14:54:44.997980] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.433 14:54:45 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:02.694 14:54:45 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:02.694 [2024-04-26 14:54:45.302682] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:02.694 [2024-04-26 14:54:45.302879] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.694 14:54:45 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:02.954 malloc0 00:18:02.954 14:54:45 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:03.214 14:54:45 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.218joURRRe 00:18:03.214 [2024-04-26 14:54:45.773770] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:03.214 14:54:45 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.218joURRRe 00:18:03.214 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.215 Initializing NVMe Controllers 00:18:13.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:13.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:13.215 Initialization complete. Launching workers. 00:18:13.215 ======================================================== 00:18:13.215 Latency(us) 00:18:13.215 Device Information : IOPS MiB/s Average min max 00:18:13.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18768.86 73.32 3409.90 1207.51 4189.10 00:18:13.215 ======================================================== 00:18:13.215 Total : 18768.86 73.32 3409.90 1207.51 4189.10 00:18:13.215 00:18:13.215 14:54:55 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.218joURRRe 00:18:13.215 14:54:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.215 14:54:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.215 14:54:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.215 14:54:55 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.218joURRRe' 00:18:13.215 14:54:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.215 14:54:55 -- target/tls.sh@28 -- # bdevperf_pid=1082191 00:18:13.475 14:54:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.475 14:54:55 -- target/tls.sh@31 -- # waitforlisten 1082191 /var/tmp/bdevperf.sock 00:18:13.475 14:54:55 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.475 14:54:55 -- common/autotest_common.sh@817 -- # '[' -z 1082191 ']' 00:18:13.475 14:54:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.475 14:54:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.475 14:54:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.475 14:54:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.475 14:54:55 -- common/autotest_common.sh@10 -- # set +x 00:18:13.475 [2024-04-26 14:54:55.924344] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:18:13.475 [2024-04-26 14:54:55.924397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082191 ] 00:18:13.475 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.475 [2024-04-26 14:54:55.975098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.475 [2024-04-26 14:54:56.025979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.045 14:54:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:14.045 14:54:56 -- common/autotest_common.sh@850 -- # return 0 00:18:14.045 14:54:56 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.218joURRRe 00:18:14.306 [2024-04-26 14:54:56.830905] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.306 [2024-04-26 14:54:56.830977] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:14.306 TLSTESTn1 00:18:14.306 14:54:56 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:14.566 Running I/O for 10 seconds... 00:18:24.563 00:18:24.563 Latency(us) 00:18:24.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.563 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:24.563 Verification LBA range: start 0x0 length 0x2000 00:18:24.563 TLSTESTn1 : 10.01 6104.01 23.84 0.00 0.00 20939.47 5761.71 31457.28 00:18:24.563 =================================================================================================================== 00:18:24.563 Total : 6104.01 23.84 0.00 0.00 20939.47 5761.71 31457.28 00:18:24.563 0 00:18:24.563 14:55:07 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:24.563 14:55:07 -- target/tls.sh@45 -- # killprocess 1082191 00:18:24.563 14:55:07 -- common/autotest_common.sh@936 -- # '[' -z 1082191 ']' 00:18:24.563 14:55:07 -- common/autotest_common.sh@940 -- # kill -0 1082191 00:18:24.563 14:55:07 -- common/autotest_common.sh@941 -- # uname 00:18:24.563 14:55:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.563 14:55:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1082191 00:18:24.563 14:55:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:24.563 14:55:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:24.563 14:55:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1082191' 00:18:24.563 killing process with pid 1082191 00:18:24.563 14:55:07 -- common/autotest_common.sh@955 -- # kill 1082191 00:18:24.563 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.563 00:18:24.563 Latency(us) 00:18:24.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.563 =================================================================================================================== 00:18:24.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.563 [2024-04-26 14:55:07.128788] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:24.563 14:55:07 -- common/autotest_common.sh@960 -- # wait 1082191 00:18:24.823 14:55:07 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Jjjmc5kuAq 00:18:24.823 14:55:07 -- common/autotest_common.sh@638 -- # local es=0 00:18:24.823 14:55:07 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Jjjmc5kuAq 00:18:24.823 14:55:07 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:24.823 14:55:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.823 14:55:07 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:24.823 14:55:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.823 14:55:07 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Jjjmc5kuAq 00:18:24.823 14:55:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:24.823 14:55:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:24.823 14:55:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:24.823 14:55:07 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Jjjmc5kuAq' 00:18:24.823 14:55:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.823 14:55:07 -- target/tls.sh@28 -- # bdevperf_pid=1084284 00:18:24.823 14:55:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.823 14:55:07 -- target/tls.sh@31 -- # waitforlisten 1084284 /var/tmp/bdevperf.sock 00:18:24.823 14:55:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.823 14:55:07 -- common/autotest_common.sh@817 -- # '[' -z 1084284 ']' 00:18:24.823 14:55:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.823 14:55:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.823 14:55:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.823 14:55:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.823 14:55:07 -- common/autotest_common.sh@10 -- # set +x 00:18:24.823 [2024-04-26 14:55:07.293550] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:18:24.823 [2024-04-26 14:55:07.293602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084284 ] 00:18:24.823 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.823 [2024-04-26 14:55:07.344330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.823 [2024-04-26 14:55:07.393520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.513 14:55:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:25.513 14:55:08 -- common/autotest_common.sh@850 -- # return 0 00:18:25.513 14:55:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Jjjmc5kuAq 00:18:25.774 [2024-04-26 14:55:08.210617] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.775 [2024-04-26 14:55:08.210681] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:25.775 [2024-04-26 14:55:08.220728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:25.775 [2024-04-26 14:55:08.221620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c53bf0 (107): Transport endpoint is not connected 00:18:25.775 [2024-04-26 14:55:08.222615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c53bf0 (9): Bad file descriptor 00:18:25.775 [2024-04-26 14:55:08.223616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:25.775 [2024-04-26 14:55:08.223622] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:25.775 [2024-04-26 14:55:08.223628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:25.775 request: 00:18:25.775 { 00:18:25.775 "name": "TLSTEST", 00:18:25.775 "trtype": "tcp", 00:18:25.775 "traddr": "10.0.0.2", 00:18:25.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.775 "adrfam": "ipv4", 00:18:25.775 "trsvcid": "4420", 00:18:25.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.775 "psk": "/tmp/tmp.Jjjmc5kuAq", 00:18:25.775 "method": "bdev_nvme_attach_controller", 00:18:25.775 "req_id": 1 00:18:25.775 } 00:18:25.775 Got JSON-RPC error response 00:18:25.775 response: 00:18:25.775 { 00:18:25.775 "code": -32602, 00:18:25.775 "message": "Invalid parameters" 00:18:25.775 } 00:18:25.775 14:55:08 -- target/tls.sh@36 -- # killprocess 1084284 00:18:25.775 14:55:08 -- common/autotest_common.sh@936 -- # '[' -z 1084284 ']' 00:18:25.775 14:55:08 -- common/autotest_common.sh@940 -- # kill -0 1084284 00:18:25.775 14:55:08 -- common/autotest_common.sh@941 -- # uname 00:18:25.775 14:55:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:25.775 14:55:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1084284 00:18:25.775 14:55:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:25.775 14:55:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:25.775 14:55:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1084284' 00:18:25.775 killing process with pid 1084284 00:18:25.775 14:55:08 -- common/autotest_common.sh@955 -- # kill 1084284 00:18:25.775 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.775 00:18:25.775 Latency(us) 00:18:25.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.775 =================================================================================================================== 00:18:25.775 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.775 [2024-04-26 14:55:08.309947] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:25.775 14:55:08 -- common/autotest_common.sh@960 -- # wait 1084284 00:18:25.775 14:55:08 -- target/tls.sh@37 -- # return 1 00:18:25.775 14:55:08 -- common/autotest_common.sh@641 -- # es=1 00:18:25.775 14:55:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:25.775 14:55:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:25.775 14:55:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:25.775 14:55:08 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.218joURRRe 00:18:25.775 14:55:08 -- common/autotest_common.sh@638 -- # local es=0 00:18:25.775 14:55:08 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.218joURRRe 00:18:25.775 14:55:08 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:25.775 14:55:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:25.775 14:55:08 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:25.775 14:55:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:25.775 14:55:08 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.218joURRRe 00:18:25.775 14:55:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:25.775 14:55:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:25.775 14:55:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:18:25.775 14:55:08 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.218joURRRe' 00:18:25.775 14:55:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.775 14:55:08 -- target/tls.sh@28 -- # bdevperf_pid=1084621 00:18:25.775 14:55:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.775 14:55:08 -- target/tls.sh@31 -- # waitforlisten 1084621 /var/tmp/bdevperf.sock 00:18:25.775 14:55:08 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:25.775 14:55:08 -- common/autotest_common.sh@817 -- # '[' -z 1084621 ']' 00:18:25.775 14:55:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.775 14:55:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:25.775 14:55:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.775 14:55:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:25.775 14:55:08 -- common/autotest_common.sh@10 -- # set +x 00:18:26.035 [2024-04-26 14:55:08.465096] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:26.035 [2024-04-26 14:55:08.465150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084621 ] 00:18:26.035 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.035 [2024-04-26 14:55:08.515930] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.035 [2024-04-26 14:55:08.564625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.606 14:55:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:26.606 14:55:09 -- common/autotest_common.sh@850 -- # return 0 00:18:26.606 14:55:09 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.218joURRRe 00:18:26.867 [2024-04-26 14:55:09.381609] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.867 [2024-04-26 14:55:09.381673] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:26.867 [2024-04-26 14:55:09.385883] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:26.867 [2024-04-26 14:55:09.385902] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:26.867 [2024-04-26 14:55:09.385923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:26.867 [2024-04-26 14:55:09.386568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228fbf0 (107): Transport endpoint is not connected 00:18:26.867 [2024-04-26 14:55:09.387562] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228fbf0 (9): Bad file descriptor 00:18:26.867 [2024-04-26 14:55:09.388563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:26.867 [2024-04-26 14:55:09.388569] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:26.867 [2024-04-26 14:55:09.388575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:26.867 request: 00:18:26.867 { 00:18:26.867 "name": "TLSTEST", 00:18:26.867 "trtype": "tcp", 00:18:26.867 "traddr": "10.0.0.2", 00:18:26.867 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:26.867 "adrfam": "ipv4", 00:18:26.867 "trsvcid": "4420", 00:18:26.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.867 "psk": "/tmp/tmp.218joURRRe", 00:18:26.867 "method": "bdev_nvme_attach_controller", 00:18:26.867 "req_id": 1 00:18:26.867 } 00:18:26.867 Got JSON-RPC error response 00:18:26.867 response: 00:18:26.867 { 00:18:26.867 "code": -32602, 00:18:26.867 "message": "Invalid parameters" 00:18:26.867 } 00:18:26.867 14:55:09 -- target/tls.sh@36 -- # killprocess 1084621 00:18:26.867 14:55:09 -- common/autotest_common.sh@936 -- # '[' -z 1084621 ']' 00:18:26.867 14:55:09 -- common/autotest_common.sh@940 -- # kill -0 1084621 00:18:26.867 14:55:09 -- common/autotest_common.sh@941 -- # uname 00:18:26.867 14:55:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.867 14:55:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1084621 00:18:26.867 14:55:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:26.867 14:55:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:26.867 14:55:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1084621' 00:18:26.867 killing process with pid 1084621 00:18:26.867 14:55:09 -- common/autotest_common.sh@955 -- # kill 1084621 00:18:26.867 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.867 00:18:26.867 Latency(us) 00:18:26.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.867 =================================================================================================================== 00:18:26.867 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.867 [2024-04-26 14:55:09.475167] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:26.867 14:55:09 -- common/autotest_common.sh@960 -- # wait 1084621 00:18:27.128 14:55:09 -- target/tls.sh@37 -- # return 1 00:18:27.128 14:55:09 -- common/autotest_common.sh@641 -- # es=1 00:18:27.128 14:55:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:27.128 14:55:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:27.128 14:55:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:27.128 14:55:09 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.218joURRRe 00:18:27.128 14:55:09 -- common/autotest_common.sh@638 -- # local es=0 00:18:27.128 14:55:09 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.218joURRRe 00:18:27.128 14:55:09 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:27.128 14:55:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:27.128 14:55:09 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:27.128 14:55:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:27.128 14:55:09 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.218joURRRe 00:18:27.128 14:55:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.128 14:55:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:27.128 14:55:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.128 14:55:09 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.218joURRRe' 00:18:27.128 14:55:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.128 14:55:09 -- target/tls.sh@28 -- # bdevperf_pid=1084843 00:18:27.128 14:55:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.128 14:55:09 -- target/tls.sh@31 -- # waitforlisten 1084843 /var/tmp/bdevperf.sock 00:18:27.128 14:55:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.128 14:55:09 -- common/autotest_common.sh@817 -- # '[' -z 1084843 ']' 00:18:27.128 14:55:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.128 14:55:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:27.128 14:55:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.128 14:55:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:27.128 14:55:09 -- common/autotest_common.sh@10 -- # set +x 00:18:27.128 [2024-04-26 14:55:09.638456] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:18:27.128 [2024-04-26 14:55:09.638516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084843 ] 00:18:27.128 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.128 [2024-04-26 14:55:09.690418] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.128 [2024-04-26 14:55:09.740585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.070 14:55:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:28.070 14:55:10 -- common/autotest_common.sh@850 -- # return 0 00:18:28.070 14:55:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.218joURRRe 00:18:28.070 [2024-04-26 14:55:10.549795] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.070 [2024-04-26 14:55:10.549860] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:28.070 [2024-04-26 14:55:10.556337] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:28.070 [2024-04-26 14:55:10.556354] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:28.070 [2024-04-26 14:55:10.556374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:28.070 [2024-04-26 14:55:10.556805] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x985bf0 (107): Transport endpoint is not connected 00:18:28.070 [2024-04-26 14:55:10.557802] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x985bf0 (9): Bad file descriptor 00:18:28.070 [2024-04-26 14:55:10.558803] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:28.070 [2024-04-26 14:55:10.558810] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:28.070 [2024-04-26 14:55:10.558815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:28.070 request: 00:18:28.070 { 00:18:28.070 "name": "TLSTEST", 00:18:28.070 "trtype": "tcp", 00:18:28.070 "traddr": "10.0.0.2", 00:18:28.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.070 "adrfam": "ipv4", 00:18:28.070 "trsvcid": "4420", 00:18:28.070 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:28.070 "psk": "/tmp/tmp.218joURRRe", 00:18:28.070 "method": "bdev_nvme_attach_controller", 00:18:28.070 "req_id": 1 00:18:28.070 } 00:18:28.070 Got JSON-RPC error response 00:18:28.070 response: 00:18:28.070 { 00:18:28.070 "code": -32602, 00:18:28.070 "message": "Invalid parameters" 00:18:28.070 } 00:18:28.070 14:55:10 -- target/tls.sh@36 -- # killprocess 1084843 00:18:28.070 14:55:10 -- common/autotest_common.sh@936 -- # '[' -z 1084843 ']' 00:18:28.070 14:55:10 -- common/autotest_common.sh@940 -- # kill -0 1084843 00:18:28.070 14:55:10 -- common/autotest_common.sh@941 -- # uname 00:18:28.070 14:55:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:28.070 14:55:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1084843 00:18:28.070 14:55:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:28.070 14:55:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:28.070 14:55:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1084843' 00:18:28.070 killing process with pid 1084843 00:18:28.070 14:55:10 -- common/autotest_common.sh@955 -- # kill 1084843 00:18:28.070 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.070 00:18:28.070 Latency(us) 00:18:28.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.070 =================================================================================================================== 00:18:28.070 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.070 [2024-04-26 14:55:10.644782] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:28.070 14:55:10 -- common/autotest_common.sh@960 -- # wait 1084843 00:18:28.331 14:55:10 -- target/tls.sh@37 -- # return 1 00:18:28.331 14:55:10 -- common/autotest_common.sh@641 -- # es=1 00:18:28.331 14:55:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:28.331 14:55:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:28.331 14:55:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:28.331 14:55:10 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:28.331 14:55:10 -- common/autotest_common.sh@638 -- # local es=0 00:18:28.331 14:55:10 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:28.331 14:55:10 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:28.331 14:55:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:28.331 14:55:10 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:28.331 14:55:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:28.331 14:55:10 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:28.331 14:55:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:28.331 14:55:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:28.331 14:55:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:28.331 14:55:10 -- target/tls.sh@23 -- # psk= 
00:18:28.331 14:55:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.331 14:55:10 -- target/tls.sh@28 -- # bdevperf_pid=1084981 00:18:28.331 14:55:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.331 14:55:10 -- target/tls.sh@31 -- # waitforlisten 1084981 /var/tmp/bdevperf.sock 00:18:28.331 14:55:10 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.331 14:55:10 -- common/autotest_common.sh@817 -- # '[' -z 1084981 ']' 00:18:28.331 14:55:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.331 14:55:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:28.331 14:55:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.331 14:55:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:28.331 14:55:10 -- common/autotest_common.sh@10 -- # set +x 00:18:28.331 [2024-04-26 14:55:10.809691] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:28.331 [2024-04-26 14:55:10.809793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084981 ] 00:18:28.331 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.331 [2024-04-26 14:55:10.864163] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.331 [2024-04-26 14:55:10.915101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.901 14:55:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:28.901 14:55:11 -- common/autotest_common.sh@850 -- # return 0 00:18:28.901 14:55:11 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:29.162 [2024-04-26 14:55:11.690256] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:29.162 [2024-04-26 14:55:11.692218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15df590 (9): Bad file descriptor 00:18:29.162 [2024-04-26 14:55:11.693218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:29.162 [2024-04-26 14:55:11.693227] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:29.162 [2024-04-26 14:55:11.693232] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:29.162 request: 00:18:29.162 { 00:18:29.162 "name": "TLSTEST", 00:18:29.162 "trtype": "tcp", 00:18:29.162 "traddr": "10.0.0.2", 00:18:29.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.162 "adrfam": "ipv4", 00:18:29.162 "trsvcid": "4420", 00:18:29.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.162 "method": "bdev_nvme_attach_controller", 00:18:29.162 "req_id": 1 00:18:29.162 } 00:18:29.162 Got JSON-RPC error response 00:18:29.162 response: 00:18:29.162 { 00:18:29.162 "code": -32602, 00:18:29.162 "message": "Invalid parameters" 00:18:29.162 } 00:18:29.162 14:55:11 -- target/tls.sh@36 -- # killprocess 1084981 00:18:29.162 14:55:11 -- common/autotest_common.sh@936 -- # '[' -z 1084981 ']' 00:18:29.162 14:55:11 -- common/autotest_common.sh@940 -- # kill -0 1084981 00:18:29.162 14:55:11 -- common/autotest_common.sh@941 -- # uname 00:18:29.162 14:55:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.162 14:55:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1084981 00:18:29.162 14:55:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:29.162 14:55:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:29.162 14:55:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1084981' 00:18:29.162 killing process with pid 1084981 00:18:29.162 14:55:11 -- common/autotest_common.sh@955 -- # kill 1084981 00:18:29.162 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.162 00:18:29.162 Latency(us) 00:18:29.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.162 =================================================================================================================== 00:18:29.162 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:29.162 14:55:11 -- common/autotest_common.sh@960 -- # wait 1084981 00:18:29.421 14:55:11 -- target/tls.sh@37 -- # return 1 00:18:29.421 14:55:11 -- common/autotest_common.sh@641 -- # es=1 00:18:29.421 14:55:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:29.421 14:55:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:29.421 14:55:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:29.421 14:55:11 -- target/tls.sh@158 -- # killprocess 1079265 00:18:29.421 14:55:11 -- common/autotest_common.sh@936 -- # '[' -z 1079265 ']' 00:18:29.421 14:55:11 -- common/autotest_common.sh@940 -- # kill -0 1079265 00:18:29.421 14:55:11 -- common/autotest_common.sh@941 -- # uname 00:18:29.421 14:55:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.421 14:55:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1079265 00:18:29.421 14:55:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:29.421 14:55:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:29.421 14:55:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1079265' 00:18:29.421 killing process with pid 1079265 00:18:29.421 14:55:11 -- common/autotest_common.sh@955 -- # kill 1079265 00:18:29.421 [2024-04-26 14:55:11.924736] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:29.421 14:55:11 -- common/autotest_common.sh@960 -- # wait 1079265 00:18:29.421 14:55:12 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:29.421 14:55:12 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:18:29.421 14:55:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:29.421 14:55:12 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:29.421 14:55:12 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:29.421 14:55:12 -- nvmf/common.sh@693 -- # digest=2 00:18:29.421 14:55:12 -- nvmf/common.sh@694 -- # python - 00:18:29.421 14:55:12 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:29.681 14:55:12 -- target/tls.sh@160 -- # mktemp 00:18:29.681 14:55:12 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.JCZme9uQoS 00:18:29.681 14:55:12 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:29.681 14:55:12 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.JCZme9uQoS 00:18:29.681 14:55:12 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:29.681 14:55:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:29.681 14:55:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:29.681 14:55:12 -- common/autotest_common.sh@10 -- # set +x 00:18:29.681 14:55:12 -- nvmf/common.sh@470 -- # nvmfpid=1085327 00:18:29.681 14:55:12 -- nvmf/common.sh@471 -- # waitforlisten 1085327 00:18:29.681 14:55:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:29.681 14:55:12 -- common/autotest_common.sh@817 -- # '[' -z 1085327 ']' 00:18:29.681 14:55:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.681 14:55:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:29.681 14:55:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.681 14:55:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:29.681 14:55:12 -- common/autotest_common.sh@10 -- # set +x 00:18:29.681 [2024-04-26 14:55:12.164804] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:29.681 [2024-04-26 14:55:12.164884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.681 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.681 [2024-04-26 14:55:12.249121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.681 [2024-04-26 14:55:12.301805] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.681 [2024-04-26 14:55:12.301843] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.681 [2024-04-26 14:55:12.301848] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.681 [2024-04-26 14:55:12.301853] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.681 [2024-04-26 14:55:12.301857] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
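format_interchange_psk above turns the raw hex string 00112233445566778899aabbccddeeff0011223344556677 plus the digest argument 2 into the NVMeTLSkey-1:02:...: value that is then written to /tmp/tmp.JCZme9uQoS and chmod'ed to 0600. A minimal sketch of what that helper plausibly computes, assuming the interchange layout is base64 of the configured PSK followed by its little-endian CRC-32, which is what the key_long value above appears to encode:

    import base64, zlib

    # Sketch only: assumes the interchange format is base64(PSK || CRC-32 LE).
    psk = b"00112233445566778899aabbccddeeff0011223344556677"
    crc = zlib.crc32(psk).to_bytes(4, "little")
    print("NVMeTLSkey-1:02:" + base64.b64encode(psk + crc).decode() + ":")

The 02 hash field corresponds to the digest argument of 2 (a SHA-384 association for the retained PSK), and the 0600 mode set here matters later: the same file is rejected once its permissions are loosened.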
00:18:29.681 [2024-04-26 14:55:12.301871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.621 14:55:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:30.621 14:55:12 -- common/autotest_common.sh@850 -- # return 0 00:18:30.621 14:55:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:30.621 14:55:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:30.621 14:55:12 -- common/autotest_common.sh@10 -- # set +x 00:18:30.621 14:55:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.621 14:55:12 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.JCZme9uQoS 00:18:30.621 14:55:12 -- target/tls.sh@49 -- # local key=/tmp/tmp.JCZme9uQoS 00:18:30.621 14:55:12 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:30.621 [2024-04-26 14:55:13.135755] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.621 14:55:13 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:30.880 14:55:13 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:30.880 [2024-04-26 14:55:13.440495] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.880 [2024-04-26 14:55:13.440680] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.880 14:55:13 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:31.139 malloc0 00:18:31.139 14:55:13 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:31.139 14:55:13 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JCZme9uQoS 00:18:31.398 [2024-04-26 14:55:13.899539] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:31.398 14:55:13 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JCZme9uQoS 00:18:31.398 14:55:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:31.398 14:55:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:31.398 14:55:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:31.398 14:55:13 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JCZme9uQoS' 00:18:31.398 14:55:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:31.398 14:55:13 -- target/tls.sh@28 -- # bdevperf_pid=1085691 00:18:31.398 14:55:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.398 14:55:13 -- target/tls.sh@31 -- # waitforlisten 1085691 /var/tmp/bdevperf.sock 00:18:31.398 14:55:13 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.398 14:55:13 -- common/autotest_common.sh@817 -- # '[' -z 1085691 ']' 00:18:31.398 14:55:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.398 14:55:13 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:31.398 14:55:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.398 14:55:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:31.398 14:55:13 -- common/autotest_common.sh@10 -- # set +x 00:18:31.398 [2024-04-26 14:55:13.962399] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:31.398 [2024-04-26 14:55:13.962448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085691 ] 00:18:31.398 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.398 [2024-04-26 14:55:14.012081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.398 [2024-04-26 14:55:14.062726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.337 14:55:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:32.337 14:55:14 -- common/autotest_common.sh@850 -- # return 0 00:18:32.337 14:55:14 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JCZme9uQoS 00:18:32.337 [2024-04-26 14:55:14.863829] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.337 [2024-04-26 14:55:14.863887] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:32.337 TLSTESTn1 00:18:32.337 14:55:14 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:32.596 Running I/O for 10 seconds... 
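This is the first attach in the run that succeeds: the controller comes up with --psk /tmp/tmp.JCZme9uQoS, the TLSTESTn1 bdev is created, and bdevperf.py drives verify I/O against it for the duration of the test. As a quick back-of-the-envelope check, the MiB/s column in the results that follow is just the reported IOPS multiplied by the 4096-byte I/O size from the bdevperf command line:

    # Sanity check against the TLSTESTn1 numbers reported below.
    iops = 5942.20        # IOPS column for TLSTESTn1
    io_size = 4096        # -o 4096 passed to bdevperf
    print(round(iops * io_size / (1024 * 1024), 2))  # 23.21 MiB/s, matching the table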
00:18:42.589 00:18:42.589 Latency(us) 00:18:42.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.589 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:42.589 Verification LBA range: start 0x0 length 0x2000 00:18:42.589 TLSTESTn1 : 10.03 5942.20 23.21 0.00 0.00 21492.14 4532.91 29491.20 00:18:42.589 =================================================================================================================== 00:18:42.589 Total : 5942.20 23.21 0.00 0.00 21492.14 4532.91 29491.20 00:18:42.589 0 00:18:42.589 14:55:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:42.589 14:55:25 -- target/tls.sh@45 -- # killprocess 1085691 00:18:42.589 14:55:25 -- common/autotest_common.sh@936 -- # '[' -z 1085691 ']' 00:18:42.589 14:55:25 -- common/autotest_common.sh@940 -- # kill -0 1085691 00:18:42.589 14:55:25 -- common/autotest_common.sh@941 -- # uname 00:18:42.589 14:55:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:42.589 14:55:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1085691 00:18:42.589 14:55:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:42.589 14:55:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:42.589 14:55:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1085691' 00:18:42.589 killing process with pid 1085691 00:18:42.589 14:55:25 -- common/autotest_common.sh@955 -- # kill 1085691 00:18:42.589 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.589 00:18:42.589 Latency(us) 00:18:42.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.589 =================================================================================================================== 00:18:42.589 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.589 [2024-04-26 14:55:25.180442] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:42.589 14:55:25 -- common/autotest_common.sh@960 -- # wait 1085691 00:18:42.850 14:55:25 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.JCZme9uQoS 00:18:42.850 14:55:25 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JCZme9uQoS 00:18:42.850 14:55:25 -- common/autotest_common.sh@638 -- # local es=0 00:18:42.850 14:55:25 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JCZme9uQoS 00:18:42.850 14:55:25 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:42.850 14:55:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:42.850 14:55:25 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:42.850 14:55:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:42.850 14:55:25 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JCZme9uQoS 00:18:42.850 14:55:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:42.850 14:55:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:42.850 14:55:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:42.850 14:55:25 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.JCZme9uQoS' 00:18:42.850 14:55:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.850 14:55:25 -- target/tls.sh@28 -- # 
bdevperf_pid=1087995 00:18:42.850 14:55:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.850 14:55:25 -- target/tls.sh@31 -- # waitforlisten 1087995 /var/tmp/bdevperf.sock 00:18:42.850 14:55:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.850 14:55:25 -- common/autotest_common.sh@817 -- # '[' -z 1087995 ']' 00:18:42.850 14:55:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.850 14:55:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:42.850 14:55:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.850 14:55:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:42.850 14:55:25 -- common/autotest_common.sh@10 -- # set +x 00:18:42.850 [2024-04-26 14:55:25.347165] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:42.850 [2024-04-26 14:55:25.347223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087995 ] 00:18:42.850 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.850 [2024-04-26 14:55:25.396615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.850 [2024-04-26 14:55:25.446711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.791 14:55:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:43.791 14:55:26 -- common/autotest_common.sh@850 -- # return 0 00:18:43.791 14:55:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JCZme9uQoS 00:18:43.791 [2024-04-26 14:55:26.247562] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.791 [2024-04-26 14:55:26.247600] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:43.791 [2024-04-26 14:55:26.247605] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.JCZme9uQoS 00:18:43.791 request: 00:18:43.791 { 00:18:43.791 "name": "TLSTEST", 00:18:43.791 "trtype": "tcp", 00:18:43.791 "traddr": "10.0.0.2", 00:18:43.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.791 "adrfam": "ipv4", 00:18:43.791 "trsvcid": "4420", 00:18:43.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.791 "psk": "/tmp/tmp.JCZme9uQoS", 00:18:43.791 "method": "bdev_nvme_attach_controller", 00:18:43.791 "req_id": 1 00:18:43.791 } 00:18:43.791 Got JSON-RPC error response 00:18:43.791 response: 00:18:43.791 { 00:18:43.791 "code": -1, 00:18:43.791 "message": "Operation not permitted" 00:18:43.791 } 00:18:43.791 14:55:26 -- target/tls.sh@36 -- # killprocess 1087995 00:18:43.791 14:55:26 -- common/autotest_common.sh@936 -- # '[' -z 1087995 ']' 00:18:43.791 14:55:26 -- common/autotest_common.sh@940 -- # kill -0 1087995 00:18:43.791 14:55:26 -- common/autotest_common.sh@941 -- # uname 00:18:43.791 14:55:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:43.791 
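The failure above is purely a permissions check: after chmod 0666, the same key file that worked moments earlier is rejected with "Incorrect permissions for PSK file" and the RPC returns -1 / Operation not permitted. A small illustrative pre-flight check in Python; the path is the one used throughout this run, and the 0600 requirement is inferred from the passing and failing cases in this log rather than stated by it:

    import os, stat

    key = "/tmp/tmp.JCZme9uQoS"             # PSK file used throughout this run
    mode = stat.S_IMODE(os.stat(key).st_mode)
    if mode != 0o600:                       # group/other access gets the key rejected
        os.chmod(key, 0o600)                # restore the mode the attach path accepts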
14:55:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1087995 00:18:43.791 14:55:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:43.791 14:55:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:43.791 14:55:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1087995' 00:18:43.791 killing process with pid 1087995 00:18:43.791 14:55:26 -- common/autotest_common.sh@955 -- # kill 1087995 00:18:43.791 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.791 00:18:43.791 Latency(us) 00:18:43.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.791 =================================================================================================================== 00:18:43.791 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:43.791 14:55:26 -- common/autotest_common.sh@960 -- # wait 1087995 00:18:43.791 14:55:26 -- target/tls.sh@37 -- # return 1 00:18:43.791 14:55:26 -- common/autotest_common.sh@641 -- # es=1 00:18:43.791 14:55:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:43.791 14:55:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:43.791 14:55:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:43.791 14:55:26 -- target/tls.sh@174 -- # killprocess 1085327 00:18:43.791 14:55:26 -- common/autotest_common.sh@936 -- # '[' -z 1085327 ']' 00:18:43.791 14:55:26 -- common/autotest_common.sh@940 -- # kill -0 1085327 00:18:43.791 14:55:26 -- common/autotest_common.sh@941 -- # uname 00:18:43.791 14:55:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:43.791 14:55:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1085327 00:18:44.052 14:55:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:44.052 14:55:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:44.052 14:55:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1085327' 00:18:44.052 killing process with pid 1085327 00:18:44.052 14:55:26 -- common/autotest_common.sh@955 -- # kill 1085327 00:18:44.052 [2024-04-26 14:55:26.493289] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:44.052 14:55:26 -- common/autotest_common.sh@960 -- # wait 1085327 00:18:44.052 14:55:26 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:44.052 14:55:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:44.052 14:55:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:44.052 14:55:26 -- common/autotest_common.sh@10 -- # set +x 00:18:44.052 14:55:26 -- nvmf/common.sh@470 -- # nvmfpid=1088155 00:18:44.052 14:55:26 -- nvmf/common.sh@471 -- # waitforlisten 1088155 00:18:44.052 14:55:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:44.052 14:55:26 -- common/autotest_common.sh@817 -- # '[' -z 1088155 ']' 00:18:44.052 14:55:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.052 14:55:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:44.052 14:55:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:44.052 14:55:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:44.052 14:55:26 -- common/autotest_common.sh@10 -- # set +x 00:18:44.052 [2024-04-26 14:55:26.666967] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:44.052 [2024-04-26 14:55:26.667021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.052 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.312 [2024-04-26 14:55:26.746664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.312 [2024-04-26 14:55:26.799520] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.312 [2024-04-26 14:55:26.799555] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.312 [2024-04-26 14:55:26.799560] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.312 [2024-04-26 14:55:26.799565] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.312 [2024-04-26 14:55:26.799569] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.312 [2024-04-26 14:55:26.799584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.883 14:55:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:44.883 14:55:27 -- common/autotest_common.sh@850 -- # return 0 00:18:44.883 14:55:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:44.883 14:55:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:44.883 14:55:27 -- common/autotest_common.sh@10 -- # set +x 00:18:44.883 14:55:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.883 14:55:27 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.JCZme9uQoS 00:18:44.883 14:55:27 -- common/autotest_common.sh@638 -- # local es=0 00:18:44.883 14:55:27 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JCZme9uQoS 00:18:44.883 14:55:27 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:18:44.883 14:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:44.883 14:55:27 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:18:44.883 14:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:44.883 14:55:27 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.JCZme9uQoS 00:18:44.883 14:55:27 -- target/tls.sh@49 -- # local key=/tmp/tmp.JCZme9uQoS 00:18:44.883 14:55:27 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:45.142 [2024-04-26 14:55:27.609544] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.142 14:55:27 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:45.142 14:55:27 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:45.415 [2024-04-26 14:55:27.902260] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:45.415 [2024-04-26 14:55:27.902430] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.415 14:55:27 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:45.415 malloc0 00:18:45.415 14:55:28 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:45.674 14:55:28 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JCZme9uQoS 00:18:45.934 [2024-04-26 14:55:28.340983] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:45.934 [2024-04-26 14:55:28.341002] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:45.934 [2024-04-26 14:55:28.341019] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:45.934 request: 00:18:45.934 { 00:18:45.934 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.934 "host": "nqn.2016-06.io.spdk:host1", 00:18:45.934 "psk": "/tmp/tmp.JCZme9uQoS", 00:18:45.934 "method": "nvmf_subsystem_add_host", 00:18:45.934 "req_id": 1 00:18:45.934 } 00:18:45.934 Got JSON-RPC error response 00:18:45.934 response: 00:18:45.934 { 00:18:45.934 "code": -32603, 00:18:45.934 "message": "Internal error" 00:18:45.934 } 00:18:45.934 14:55:28 -- common/autotest_common.sh@641 -- # es=1 00:18:45.934 14:55:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:45.935 14:55:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:45.935 14:55:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:45.935 14:55:28 -- target/tls.sh@180 -- # killprocess 1088155 00:18:45.935 14:55:28 -- common/autotest_common.sh@936 -- # '[' -z 1088155 ']' 00:18:45.935 14:55:28 -- common/autotest_common.sh@940 -- # kill -0 1088155 00:18:45.935 14:55:28 -- common/autotest_common.sh@941 -- # uname 00:18:45.935 14:55:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:45.935 14:55:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1088155 00:18:45.935 14:55:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:45.935 14:55:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:45.935 14:55:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1088155' 00:18:45.935 killing process with pid 1088155 00:18:45.935 14:55:28 -- common/autotest_common.sh@955 -- # kill 1088155 00:18:45.935 14:55:28 -- common/autotest_common.sh@960 -- # wait 1088155 00:18:45.935 14:55:28 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.JCZme9uQoS 00:18:45.935 14:55:28 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:45.935 14:55:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:45.935 14:55:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:45.935 14:55:28 -- common/autotest_common.sh@10 -- # set +x 00:18:45.935 14:55:28 -- nvmf/common.sh@470 -- # nvmfpid=1088613 00:18:45.935 14:55:28 -- nvmf/common.sh@471 -- # waitforlisten 1088613 00:18:45.935 14:55:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.935 14:55:28 -- common/autotest_common.sh@817 -- # '[' -z 1088613 ']' 00:18:45.935 14:55:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.935 14:55:28 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:45.935 14:55:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.935 14:55:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:45.935 14:55:28 -- common/autotest_common.sh@10 -- # set +x 00:18:46.194 [2024-04-26 14:55:28.603289] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:46.194 [2024-04-26 14:55:28.603344] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.194 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.194 [2024-04-26 14:55:28.686737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.194 [2024-04-26 14:55:28.745469] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.194 [2024-04-26 14:55:28.745503] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.194 [2024-04-26 14:55:28.745509] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.194 [2024-04-26 14:55:28.745513] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.194 [2024-04-26 14:55:28.745518] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.194 [2024-04-26 14:55:28.745539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.765 14:55:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:46.765 14:55:29 -- common/autotest_common.sh@850 -- # return 0 00:18:46.765 14:55:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:46.765 14:55:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:46.765 14:55:29 -- common/autotest_common.sh@10 -- # set +x 00:18:46.765 14:55:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.765 14:55:29 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.JCZme9uQoS 00:18:46.765 14:55:29 -- target/tls.sh@49 -- # local key=/tmp/tmp.JCZme9uQoS 00:18:46.765 14:55:29 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:47.024 [2024-04-26 14:55:29.536968] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.024 14:55:29 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:47.284 14:55:29 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:47.284 [2024-04-26 14:55:29.829688] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.284 [2024-04-26 14:55:29.829864] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.284 14:55:29 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:47.545 malloc0 00:18:47.545 14:55:29 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:47.545 14:55:30 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JCZme9uQoS 00:18:47.827 [2024-04-26 14:55:30.268579] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:47.827 14:55:30 -- target/tls.sh@188 -- # bdevperf_pid=1088979 00:18:47.827 14:55:30 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.827 14:55:30 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.827 14:55:30 -- target/tls.sh@191 -- # waitforlisten 1088979 /var/tmp/bdevperf.sock 00:18:47.827 14:55:30 -- common/autotest_common.sh@817 -- # '[' -z 1088979 ']' 00:18:47.827 14:55:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.827 14:55:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:47.827 14:55:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.827 14:55:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:47.827 14:55:30 -- common/autotest_common.sh@10 -- # set +x 00:18:47.827 [2024-04-26 14:55:30.330653] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:47.827 [2024-04-26 14:55:30.330706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088979 ] 00:18:47.827 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.827 [2024-04-26 14:55:30.381153] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.827 [2024-04-26 14:55:30.435346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.476 14:55:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:48.476 14:55:31 -- common/autotest_common.sh@850 -- # return 0 00:18:48.476 14:55:31 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JCZme9uQoS 00:18:48.736 [2024-04-26 14:55:31.236294] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.736 [2024-04-26 14:55:31.236357] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:48.736 TLSTESTn1 00:18:48.736 14:55:31 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:48.996 14:55:31 -- target/tls.sh@196 -- # tgtconf='{ 00:18:48.996 "subsystems": [ 00:18:48.996 { 00:18:48.996 "subsystem": "keyring", 00:18:48.996 "config": [] 00:18:48.996 }, 00:18:48.996 { 00:18:48.996 "subsystem": "iobuf", 00:18:48.996 "config": [ 00:18:48.996 { 00:18:48.996 "method": "iobuf_set_options", 00:18:48.996 "params": { 00:18:48.996 
"small_pool_count": 8192, 00:18:48.996 "large_pool_count": 1024, 00:18:48.996 "small_bufsize": 8192, 00:18:48.996 "large_bufsize": 135168 00:18:48.996 } 00:18:48.996 } 00:18:48.996 ] 00:18:48.996 }, 00:18:48.996 { 00:18:48.996 "subsystem": "sock", 00:18:48.996 "config": [ 00:18:48.996 { 00:18:48.996 "method": "sock_impl_set_options", 00:18:48.996 "params": { 00:18:48.996 "impl_name": "posix", 00:18:48.996 "recv_buf_size": 2097152, 00:18:48.996 "send_buf_size": 2097152, 00:18:48.996 "enable_recv_pipe": true, 00:18:48.996 "enable_quickack": false, 00:18:48.996 "enable_placement_id": 0, 00:18:48.996 "enable_zerocopy_send_server": true, 00:18:48.996 "enable_zerocopy_send_client": false, 00:18:48.996 "zerocopy_threshold": 0, 00:18:48.996 "tls_version": 0, 00:18:48.996 "enable_ktls": false 00:18:48.996 } 00:18:48.996 }, 00:18:48.996 { 00:18:48.996 "method": "sock_impl_set_options", 00:18:48.996 "params": { 00:18:48.996 "impl_name": "ssl", 00:18:48.996 "recv_buf_size": 4096, 00:18:48.996 "send_buf_size": 4096, 00:18:48.996 "enable_recv_pipe": true, 00:18:48.996 "enable_quickack": false, 00:18:48.996 "enable_placement_id": 0, 00:18:48.996 "enable_zerocopy_send_server": true, 00:18:48.996 "enable_zerocopy_send_client": false, 00:18:48.996 "zerocopy_threshold": 0, 00:18:48.996 "tls_version": 0, 00:18:48.996 "enable_ktls": false 00:18:48.996 } 00:18:48.996 } 00:18:48.996 ] 00:18:48.996 }, 00:18:48.996 { 00:18:48.996 "subsystem": "vmd", 00:18:48.996 "config": [] 00:18:48.996 }, 00:18:48.996 { 00:18:48.996 "subsystem": "accel", 00:18:48.997 "config": [ 00:18:48.997 { 00:18:48.997 "method": "accel_set_options", 00:18:48.997 "params": { 00:18:48.997 "small_cache_size": 128, 00:18:48.997 "large_cache_size": 16, 00:18:48.997 "task_count": 2048, 00:18:48.997 "sequence_count": 2048, 00:18:48.997 "buf_count": 2048 00:18:48.997 } 00:18:48.997 } 00:18:48.997 ] 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "subsystem": "bdev", 00:18:48.997 "config": [ 00:18:48.997 { 00:18:48.997 "method": "bdev_set_options", 00:18:48.997 "params": { 00:18:48.997 "bdev_io_pool_size": 65535, 00:18:48.997 "bdev_io_cache_size": 256, 00:18:48.997 "bdev_auto_examine": true, 00:18:48.997 "iobuf_small_cache_size": 128, 00:18:48.997 "iobuf_large_cache_size": 16 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "bdev_raid_set_options", 00:18:48.997 "params": { 00:18:48.997 "process_window_size_kb": 1024 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "bdev_iscsi_set_options", 00:18:48.997 "params": { 00:18:48.997 "timeout_sec": 30 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "bdev_nvme_set_options", 00:18:48.997 "params": { 00:18:48.997 "action_on_timeout": "none", 00:18:48.997 "timeout_us": 0, 00:18:48.997 "timeout_admin_us": 0, 00:18:48.997 "keep_alive_timeout_ms": 10000, 00:18:48.997 "arbitration_burst": 0, 00:18:48.997 "low_priority_weight": 0, 00:18:48.997 "medium_priority_weight": 0, 00:18:48.997 "high_priority_weight": 0, 00:18:48.997 "nvme_adminq_poll_period_us": 10000, 00:18:48.997 "nvme_ioq_poll_period_us": 0, 00:18:48.997 "io_queue_requests": 0, 00:18:48.997 "delay_cmd_submit": true, 00:18:48.997 "transport_retry_count": 4, 00:18:48.997 "bdev_retry_count": 3, 00:18:48.997 "transport_ack_timeout": 0, 00:18:48.997 "ctrlr_loss_timeout_sec": 0, 00:18:48.997 "reconnect_delay_sec": 0, 00:18:48.997 "fast_io_fail_timeout_sec": 0, 00:18:48.997 "disable_auto_failback": false, 00:18:48.997 "generate_uuids": false, 00:18:48.997 "transport_tos": 0, 00:18:48.997 "nvme_error_stat": 
false, 00:18:48.997 "rdma_srq_size": 0, 00:18:48.997 "io_path_stat": false, 00:18:48.997 "allow_accel_sequence": false, 00:18:48.997 "rdma_max_cq_size": 0, 00:18:48.997 "rdma_cm_event_timeout_ms": 0, 00:18:48.997 "dhchap_digests": [ 00:18:48.997 "sha256", 00:18:48.997 "sha384", 00:18:48.997 "sha512" 00:18:48.997 ], 00:18:48.997 "dhchap_dhgroups": [ 00:18:48.997 "null", 00:18:48.997 "ffdhe2048", 00:18:48.997 "ffdhe3072", 00:18:48.997 "ffdhe4096", 00:18:48.997 "ffdhe6144", 00:18:48.997 "ffdhe8192" 00:18:48.997 ] 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "bdev_nvme_set_hotplug", 00:18:48.997 "params": { 00:18:48.997 "period_us": 100000, 00:18:48.997 "enable": false 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "bdev_malloc_create", 00:18:48.997 "params": { 00:18:48.997 "name": "malloc0", 00:18:48.997 "num_blocks": 8192, 00:18:48.997 "block_size": 4096, 00:18:48.997 "physical_block_size": 4096, 00:18:48.997 "uuid": "aaca8e32-6d95-4fdf-b08b-f2df19c69465", 00:18:48.997 "optimal_io_boundary": 0 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "bdev_wait_for_examine" 00:18:48.997 } 00:18:48.997 ] 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "subsystem": "nbd", 00:18:48.997 "config": [] 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "subsystem": "scheduler", 00:18:48.997 "config": [ 00:18:48.997 { 00:18:48.997 "method": "framework_set_scheduler", 00:18:48.997 "params": { 00:18:48.997 "name": "static" 00:18:48.997 } 00:18:48.997 } 00:18:48.997 ] 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "subsystem": "nvmf", 00:18:48.997 "config": [ 00:18:48.997 { 00:18:48.997 "method": "nvmf_set_config", 00:18:48.997 "params": { 00:18:48.997 "discovery_filter": "match_any", 00:18:48.997 "admin_cmd_passthru": { 00:18:48.997 "identify_ctrlr": false 00:18:48.997 } 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "nvmf_set_max_subsystems", 00:18:48.997 "params": { 00:18:48.997 "max_subsystems": 1024 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "nvmf_set_crdt", 00:18:48.997 "params": { 00:18:48.997 "crdt1": 0, 00:18:48.997 "crdt2": 0, 00:18:48.997 "crdt3": 0 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "nvmf_create_transport", 00:18:48.997 "params": { 00:18:48.997 "trtype": "TCP", 00:18:48.997 "max_queue_depth": 128, 00:18:48.997 "max_io_qpairs_per_ctrlr": 127, 00:18:48.997 "in_capsule_data_size": 4096, 00:18:48.997 "max_io_size": 131072, 00:18:48.997 "io_unit_size": 131072, 00:18:48.997 "max_aq_depth": 128, 00:18:48.997 "num_shared_buffers": 511, 00:18:48.997 "buf_cache_size": 4294967295, 00:18:48.997 "dif_insert_or_strip": false, 00:18:48.997 "zcopy": false, 00:18:48.997 "c2h_success": false, 00:18:48.997 "sock_priority": 0, 00:18:48.997 "abort_timeout_sec": 1, 00:18:48.997 "ack_timeout": 0, 00:18:48.997 "data_wr_pool_size": 0 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "nvmf_create_subsystem", 00:18:48.997 "params": { 00:18:48.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.997 "allow_any_host": false, 00:18:48.997 "serial_number": "SPDK00000000000001", 00:18:48.997 "model_number": "SPDK bdev Controller", 00:18:48.997 "max_namespaces": 10, 00:18:48.997 "min_cntlid": 1, 00:18:48.997 "max_cntlid": 65519, 00:18:48.997 "ana_reporting": false 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "nvmf_subsystem_add_host", 00:18:48.997 "params": { 00:18:48.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.997 "host": "nqn.2016-06.io.spdk:host1", 
00:18:48.997 "psk": "/tmp/tmp.JCZme9uQoS" 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "nvmf_subsystem_add_ns", 00:18:48.997 "params": { 00:18:48.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.997 "namespace": { 00:18:48.997 "nsid": 1, 00:18:48.997 "bdev_name": "malloc0", 00:18:48.997 "nguid": "AACA8E326D954FDFB08BF2DF19C69465", 00:18:48.997 "uuid": "aaca8e32-6d95-4fdf-b08b-f2df19c69465", 00:18:48.997 "no_auto_visible": false 00:18:48.997 } 00:18:48.997 } 00:18:48.997 }, 00:18:48.997 { 00:18:48.997 "method": "nvmf_subsystem_add_listener", 00:18:48.997 "params": { 00:18:48.997 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.997 "listen_address": { 00:18:48.997 "trtype": "TCP", 00:18:48.997 "adrfam": "IPv4", 00:18:48.997 "traddr": "10.0.0.2", 00:18:48.997 "trsvcid": "4420" 00:18:48.997 }, 00:18:48.997 "secure_channel": true 00:18:48.997 } 00:18:48.997 } 00:18:48.997 ] 00:18:48.997 } 00:18:48.997 ] 00:18:48.997 }' 00:18:48.997 14:55:31 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:49.258 14:55:31 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:49.258 "subsystems": [ 00:18:49.258 { 00:18:49.258 "subsystem": "keyring", 00:18:49.258 "config": [] 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "subsystem": "iobuf", 00:18:49.258 "config": [ 00:18:49.258 { 00:18:49.258 "method": "iobuf_set_options", 00:18:49.258 "params": { 00:18:49.258 "small_pool_count": 8192, 00:18:49.258 "large_pool_count": 1024, 00:18:49.258 "small_bufsize": 8192, 00:18:49.258 "large_bufsize": 135168 00:18:49.258 } 00:18:49.258 } 00:18:49.258 ] 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "subsystem": "sock", 00:18:49.258 "config": [ 00:18:49.258 { 00:18:49.258 "method": "sock_impl_set_options", 00:18:49.258 "params": { 00:18:49.258 "impl_name": "posix", 00:18:49.258 "recv_buf_size": 2097152, 00:18:49.258 "send_buf_size": 2097152, 00:18:49.258 "enable_recv_pipe": true, 00:18:49.258 "enable_quickack": false, 00:18:49.258 "enable_placement_id": 0, 00:18:49.258 "enable_zerocopy_send_server": true, 00:18:49.258 "enable_zerocopy_send_client": false, 00:18:49.258 "zerocopy_threshold": 0, 00:18:49.258 "tls_version": 0, 00:18:49.258 "enable_ktls": false 00:18:49.258 } 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "method": "sock_impl_set_options", 00:18:49.258 "params": { 00:18:49.258 "impl_name": "ssl", 00:18:49.258 "recv_buf_size": 4096, 00:18:49.258 "send_buf_size": 4096, 00:18:49.258 "enable_recv_pipe": true, 00:18:49.258 "enable_quickack": false, 00:18:49.258 "enable_placement_id": 0, 00:18:49.258 "enable_zerocopy_send_server": true, 00:18:49.258 "enable_zerocopy_send_client": false, 00:18:49.258 "zerocopy_threshold": 0, 00:18:49.258 "tls_version": 0, 00:18:49.258 "enable_ktls": false 00:18:49.258 } 00:18:49.258 } 00:18:49.258 ] 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "subsystem": "vmd", 00:18:49.258 "config": [] 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "subsystem": "accel", 00:18:49.258 "config": [ 00:18:49.258 { 00:18:49.258 "method": "accel_set_options", 00:18:49.258 "params": { 00:18:49.258 "small_cache_size": 128, 00:18:49.258 "large_cache_size": 16, 00:18:49.258 "task_count": 2048, 00:18:49.258 "sequence_count": 2048, 00:18:49.258 "buf_count": 2048 00:18:49.258 } 00:18:49.258 } 00:18:49.258 ] 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "subsystem": "bdev", 00:18:49.258 "config": [ 00:18:49.258 { 00:18:49.258 "method": "bdev_set_options", 00:18:49.258 "params": { 00:18:49.258 "bdev_io_pool_size": 65535, 
00:18:49.258 "bdev_io_cache_size": 256, 00:18:49.258 "bdev_auto_examine": true, 00:18:49.258 "iobuf_small_cache_size": 128, 00:18:49.258 "iobuf_large_cache_size": 16 00:18:49.258 } 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "method": "bdev_raid_set_options", 00:18:49.258 "params": { 00:18:49.258 "process_window_size_kb": 1024 00:18:49.258 } 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "method": "bdev_iscsi_set_options", 00:18:49.258 "params": { 00:18:49.258 "timeout_sec": 30 00:18:49.258 } 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "method": "bdev_nvme_set_options", 00:18:49.258 "params": { 00:18:49.258 "action_on_timeout": "none", 00:18:49.258 "timeout_us": 0, 00:18:49.258 "timeout_admin_us": 0, 00:18:49.258 "keep_alive_timeout_ms": 10000, 00:18:49.258 "arbitration_burst": 0, 00:18:49.258 "low_priority_weight": 0, 00:18:49.258 "medium_priority_weight": 0, 00:18:49.258 "high_priority_weight": 0, 00:18:49.258 "nvme_adminq_poll_period_us": 10000, 00:18:49.258 "nvme_ioq_poll_period_us": 0, 00:18:49.258 "io_queue_requests": 512, 00:18:49.258 "delay_cmd_submit": true, 00:18:49.258 "transport_retry_count": 4, 00:18:49.258 "bdev_retry_count": 3, 00:18:49.258 "transport_ack_timeout": 0, 00:18:49.258 "ctrlr_loss_timeout_sec": 0, 00:18:49.258 "reconnect_delay_sec": 0, 00:18:49.258 "fast_io_fail_timeout_sec": 0, 00:18:49.258 "disable_auto_failback": false, 00:18:49.258 "generate_uuids": false, 00:18:49.258 "transport_tos": 0, 00:18:49.258 "nvme_error_stat": false, 00:18:49.258 "rdma_srq_size": 0, 00:18:49.258 "io_path_stat": false, 00:18:49.258 "allow_accel_sequence": false, 00:18:49.258 "rdma_max_cq_size": 0, 00:18:49.258 "rdma_cm_event_timeout_ms": 0, 00:18:49.258 "dhchap_digests": [ 00:18:49.258 "sha256", 00:18:49.258 "sha384", 00:18:49.258 "sha512" 00:18:49.258 ], 00:18:49.258 "dhchap_dhgroups": [ 00:18:49.258 "null", 00:18:49.258 "ffdhe2048", 00:18:49.258 "ffdhe3072", 00:18:49.258 "ffdhe4096", 00:18:49.258 "ffdhe6144", 00:18:49.258 "ffdhe8192" 00:18:49.258 ] 00:18:49.258 } 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "method": "bdev_nvme_attach_controller", 00:18:49.258 "params": { 00:18:49.258 "name": "TLSTEST", 00:18:49.258 "trtype": "TCP", 00:18:49.258 "adrfam": "IPv4", 00:18:49.258 "traddr": "10.0.0.2", 00:18:49.258 "trsvcid": "4420", 00:18:49.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.258 "prchk_reftag": false, 00:18:49.258 "prchk_guard": false, 00:18:49.258 "ctrlr_loss_timeout_sec": 0, 00:18:49.258 "reconnect_delay_sec": 0, 00:18:49.258 "fast_io_fail_timeout_sec": 0, 00:18:49.258 "psk": "/tmp/tmp.JCZme9uQoS", 00:18:49.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.258 "hdgst": false, 00:18:49.258 "ddgst": false 00:18:49.258 } 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "method": "bdev_nvme_set_hotplug", 00:18:49.258 "params": { 00:18:49.258 "period_us": 100000, 00:18:49.258 "enable": false 00:18:49.258 } 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "method": "bdev_wait_for_examine" 00:18:49.258 } 00:18:49.258 ] 00:18:49.258 }, 00:18:49.258 { 00:18:49.258 "subsystem": "nbd", 00:18:49.258 "config": [] 00:18:49.258 } 00:18:49.258 ] 00:18:49.258 }' 00:18:49.258 14:55:31 -- target/tls.sh@199 -- # killprocess 1088979 00:18:49.259 14:55:31 -- common/autotest_common.sh@936 -- # '[' -z 1088979 ']' 00:18:49.259 14:55:31 -- common/autotest_common.sh@940 -- # kill -0 1088979 00:18:49.259 14:55:31 -- common/autotest_common.sh@941 -- # uname 00:18:49.259 14:55:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:49.259 14:55:31 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 1088979 00:18:49.259 14:55:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:49.259 14:55:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:49.259 14:55:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1088979' 00:18:49.259 killing process with pid 1088979 00:18:49.259 14:55:31 -- common/autotest_common.sh@955 -- # kill 1088979 00:18:49.259 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.259 00:18:49.259 Latency(us) 00:18:49.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.259 =================================================================================================================== 00:18:49.259 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:49.259 [2024-04-26 14:55:31.872799] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:49.259 14:55:31 -- common/autotest_common.sh@960 -- # wait 1088979 00:18:49.519 14:55:31 -- target/tls.sh@200 -- # killprocess 1088613 00:18:49.519 14:55:31 -- common/autotest_common.sh@936 -- # '[' -z 1088613 ']' 00:18:49.519 14:55:31 -- common/autotest_common.sh@940 -- # kill -0 1088613 00:18:49.519 14:55:31 -- common/autotest_common.sh@941 -- # uname 00:18:49.519 14:55:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:49.519 14:55:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1088613 00:18:49.519 14:55:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:49.519 14:55:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:49.519 14:55:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1088613' 00:18:49.519 killing process with pid 1088613 00:18:49.519 14:55:32 -- common/autotest_common.sh@955 -- # kill 1088613 00:18:49.519 [2024-04-26 14:55:32.043030] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:49.519 14:55:32 -- common/autotest_common.sh@960 -- # wait 1088613 00:18:49.519 14:55:32 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:49.519 14:55:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:49.519 14:55:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:49.519 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:18:49.519 14:55:32 -- target/tls.sh@203 -- # echo '{ 00:18:49.519 "subsystems": [ 00:18:49.519 { 00:18:49.519 "subsystem": "keyring", 00:18:49.519 "config": [] 00:18:49.519 }, 00:18:49.519 { 00:18:49.519 "subsystem": "iobuf", 00:18:49.519 "config": [ 00:18:49.519 { 00:18:49.519 "method": "iobuf_set_options", 00:18:49.519 "params": { 00:18:49.519 "small_pool_count": 8192, 00:18:49.519 "large_pool_count": 1024, 00:18:49.519 "small_bufsize": 8192, 00:18:49.519 "large_bufsize": 135168 00:18:49.519 } 00:18:49.519 } 00:18:49.519 ] 00:18:49.519 }, 00:18:49.519 { 00:18:49.519 "subsystem": "sock", 00:18:49.519 "config": [ 00:18:49.519 { 00:18:49.519 "method": "sock_impl_set_options", 00:18:49.519 "params": { 00:18:49.519 "impl_name": "posix", 00:18:49.519 "recv_buf_size": 2097152, 00:18:49.519 "send_buf_size": 2097152, 00:18:49.519 "enable_recv_pipe": true, 00:18:49.519 "enable_quickack": false, 00:18:49.519 "enable_placement_id": 0, 00:18:49.519 "enable_zerocopy_send_server": true, 00:18:49.519 "enable_zerocopy_send_client": false, 00:18:49.519 "zerocopy_threshold": 0, 
00:18:49.519 "tls_version": 0, 00:18:49.519 "enable_ktls": false 00:18:49.519 } 00:18:49.519 }, 00:18:49.519 { 00:18:49.519 "method": "sock_impl_set_options", 00:18:49.519 "params": { 00:18:49.519 "impl_name": "ssl", 00:18:49.519 "recv_buf_size": 4096, 00:18:49.519 "send_buf_size": 4096, 00:18:49.519 "enable_recv_pipe": true, 00:18:49.519 "enable_quickack": false, 00:18:49.519 "enable_placement_id": 0, 00:18:49.519 "enable_zerocopy_send_server": true, 00:18:49.519 "enable_zerocopy_send_client": false, 00:18:49.519 "zerocopy_threshold": 0, 00:18:49.519 "tls_version": 0, 00:18:49.519 "enable_ktls": false 00:18:49.520 } 00:18:49.520 } 00:18:49.520 ] 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "subsystem": "vmd", 00:18:49.520 "config": [] 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "subsystem": "accel", 00:18:49.520 "config": [ 00:18:49.520 { 00:18:49.520 "method": "accel_set_options", 00:18:49.520 "params": { 00:18:49.520 "small_cache_size": 128, 00:18:49.520 "large_cache_size": 16, 00:18:49.520 "task_count": 2048, 00:18:49.520 "sequence_count": 2048, 00:18:49.520 "buf_count": 2048 00:18:49.520 } 00:18:49.520 } 00:18:49.520 ] 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "subsystem": "bdev", 00:18:49.520 "config": [ 00:18:49.520 { 00:18:49.520 "method": "bdev_set_options", 00:18:49.520 "params": { 00:18:49.520 "bdev_io_pool_size": 65535, 00:18:49.520 "bdev_io_cache_size": 256, 00:18:49.520 "bdev_auto_examine": true, 00:18:49.520 "iobuf_small_cache_size": 128, 00:18:49.520 "iobuf_large_cache_size": 16 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "bdev_raid_set_options", 00:18:49.520 "params": { 00:18:49.520 "process_window_size_kb": 1024 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "bdev_iscsi_set_options", 00:18:49.520 "params": { 00:18:49.520 "timeout_sec": 30 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "bdev_nvme_set_options", 00:18:49.520 "params": { 00:18:49.520 "action_on_timeout": "none", 00:18:49.520 "timeout_us": 0, 00:18:49.520 "timeout_admin_us": 0, 00:18:49.520 "keep_alive_timeout_ms": 10000, 00:18:49.520 "arbitration_burst": 0, 00:18:49.520 "low_priority_weight": 0, 00:18:49.520 "medium_priority_weight": 0, 00:18:49.520 "high_priority_weight": 0, 00:18:49.520 "nvme_adminq_poll_period_us": 10000, 00:18:49.520 "nvme_ioq_poll_period_us": 0, 00:18:49.520 "io_queue_requests": 0, 00:18:49.520 "delay_cmd_submit": true, 00:18:49.520 "transport_retry_count": 4, 00:18:49.520 "bdev_retry_count": 3, 00:18:49.520 "transport_ack_timeout": 0, 00:18:49.520 "ctrlr_loss_timeout_sec": 0, 00:18:49.520 "reconnect_delay_sec": 0, 00:18:49.520 "fast_io_fail_timeout_sec": 0, 00:18:49.520 "disable_auto_failback": false, 00:18:49.520 "generate_uuids": false, 00:18:49.520 "transport_tos": 0, 00:18:49.520 "nvme_error_stat": false, 00:18:49.520 "rdma_srq_size": 0, 00:18:49.520 "io_path_stat": false, 00:18:49.520 "allow_accel_sequence": false, 00:18:49.520 "rdma_max_cq_size": 0, 00:18:49.520 "rdma_cm_event_timeout_ms": 0, 00:18:49.520 "dhchap_digests": [ 00:18:49.520 "sha256", 00:18:49.520 "sha384", 00:18:49.520 "sha512" 00:18:49.520 ], 00:18:49.520 "dhchap_dhgroups": [ 00:18:49.520 "null", 00:18:49.520 "ffdhe2048", 00:18:49.520 "ffdhe3072", 00:18:49.520 "ffdhe4096", 00:18:49.520 "ffdhe6144", 00:18:49.520 "ffdhe8192" 00:18:49.520 ] 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "bdev_nvme_set_hotplug", 00:18:49.520 "params": { 00:18:49.520 "period_us": 100000, 00:18:49.520 "enable": false 00:18:49.520 } 00:18:49.520 }, 
00:18:49.520 { 00:18:49.520 "method": "bdev_malloc_create", 00:18:49.520 "params": { 00:18:49.520 "name": "malloc0", 00:18:49.520 "num_blocks": 8192, 00:18:49.520 "block_size": 4096, 00:18:49.520 "physical_block_size": 4096, 00:18:49.520 "uuid": "aaca8e32-6d95-4fdf-b08b-f2df19c69465", 00:18:49.520 "optimal_io_boundary": 0 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "bdev_wait_for_examine" 00:18:49.520 } 00:18:49.520 ] 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "subsystem": "nbd", 00:18:49.520 "config": [] 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "subsystem": "scheduler", 00:18:49.520 "config": [ 00:18:49.520 { 00:18:49.520 "method": "framework_set_scheduler", 00:18:49.520 "params": { 00:18:49.520 "name": "static" 00:18:49.520 } 00:18:49.520 } 00:18:49.520 ] 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "subsystem": "nvmf", 00:18:49.520 "config": [ 00:18:49.520 { 00:18:49.520 "method": "nvmf_set_config", 00:18:49.520 "params": { 00:18:49.520 "discovery_filter": "match_any", 00:18:49.520 "admin_cmd_passthru": { 00:18:49.520 "identify_ctrlr": false 00:18:49.520 } 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "nvmf_set_max_subsystems", 00:18:49.520 "params": { 00:18:49.520 "max_subsystems": 1024 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "nvmf_set_crdt", 00:18:49.520 "params": { 00:18:49.520 "crdt1": 0, 00:18:49.520 "crdt2": 0, 00:18:49.520 "crdt3": 0 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "nvmf_create_transport", 00:18:49.520 "params": { 00:18:49.520 "trtype": "TCP", 00:18:49.520 "max_queue_depth": 128, 00:18:49.520 "max_io_qpairs_per_ctrlr": 127, 00:18:49.520 "in_capsule_data_size": 4096, 00:18:49.520 "max_io_size": 131072, 00:18:49.520 "io_unit_size": 131072, 00:18:49.520 "max_aq_depth": 128, 00:18:49.520 "num_shared_buffers": 511, 00:18:49.520 "buf_cache_size": 4294967295, 00:18:49.520 "dif_insert_or_strip": false, 00:18:49.520 "zcopy": false, 00:18:49.520 "c2h_success": false, 00:18:49.520 "sock_priority": 0, 00:18:49.520 "abort_timeout_sec": 1, 00:18:49.520 "ack_timeout": 0, 00:18:49.520 "data_wr_pool_size": 0 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "nvmf_create_subsystem", 00:18:49.520 "params": { 00:18:49.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.520 "allow_any_host": false, 00:18:49.520 "serial_number": "SPDK00000000000001", 00:18:49.520 "model_number": "SPDK bdev Controller", 00:18:49.520 "max_namespaces": 10, 00:18:49.520 "min_cntlid": 1, 00:18:49.520 "max_cntlid": 65519, 00:18:49.520 "ana_reporting": false 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "nvmf_subsystem_add_host", 00:18:49.520 "params": { 00:18:49.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.520 "host": "nqn.2016-06.io.spdk:host1", 00:18:49.520 "psk": "/tmp/tmp.JCZme9uQoS" 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "nvmf_subsystem_add_ns", 00:18:49.520 "params": { 00:18:49.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.520 "namespace": { 00:18:49.520 "nsid": 1, 00:18:49.520 "bdev_name": "malloc0", 00:18:49.520 "nguid": "AACA8E326D954FDFB08BF2DF19C69465", 00:18:49.520 "uuid": "aaca8e32-6d95-4fdf-b08b-f2df19c69465", 00:18:49.520 "no_auto_visible": false 00:18:49.520 } 00:18:49.520 } 00:18:49.520 }, 00:18:49.520 { 00:18:49.520 "method": "nvmf_subsystem_add_listener", 00:18:49.520 "params": { 00:18:49.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.520 "listen_address": { 00:18:49.520 "trtype": "TCP", 00:18:49.520 "adrfam": 
"IPv4", 00:18:49.520 "traddr": "10.0.0.2", 00:18:49.520 "trsvcid": "4420" 00:18:49.520 }, 00:18:49.520 "secure_channel": true 00:18:49.520 } 00:18:49.520 } 00:18:49.520 ] 00:18:49.520 } 00:18:49.520 ] 00:18:49.520 }' 00:18:49.520 14:55:32 -- nvmf/common.sh@470 -- # nvmfpid=1089424 00:18:49.520 14:55:32 -- nvmf/common.sh@471 -- # waitforlisten 1089424 00:18:49.520 14:55:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:49.520 14:55:32 -- common/autotest_common.sh@817 -- # '[' -z 1089424 ']' 00:18:49.520 14:55:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.520 14:55:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:49.520 14:55:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.520 14:55:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:49.520 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:18:49.780 [2024-04-26 14:55:32.224327] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:49.780 [2024-04-26 14:55:32.224414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.780 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.780 [2024-04-26 14:55:32.311090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.780 [2024-04-26 14:55:32.364629] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.780 [2024-04-26 14:55:32.364664] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.780 [2024-04-26 14:55:32.364669] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.780 [2024-04-26 14:55:32.364674] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.780 [2024-04-26 14:55:32.364678] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.780 [2024-04-26 14:55:32.364720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.040 [2024-04-26 14:55:32.540093] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.040 [2024-04-26 14:55:32.556063] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:50.040 [2024-04-26 14:55:32.572113] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.040 [2024-04-26 14:55:32.581153] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.612 14:55:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:50.612 14:55:32 -- common/autotest_common.sh@850 -- # return 0 00:18:50.612 14:55:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:50.612 14:55:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:50.612 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:18:50.612 14:55:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.612 14:55:33 -- target/tls.sh@207 -- # bdevperf_pid=1089496 00:18:50.612 14:55:33 -- target/tls.sh@208 -- # waitforlisten 1089496 /var/tmp/bdevperf.sock 00:18:50.612 14:55:33 -- common/autotest_common.sh@817 -- # '[' -z 1089496 ']' 00:18:50.612 14:55:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.612 14:55:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:50.612 14:55:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
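The target above consumes the whole JSON blob on /dev/fd/62 in one shot; the same TLS-enabled target state can also be built one RPC at a time with scripts/rpc.py, which is what the later stages of this run do. A minimal sketch under the assumptions that the target's RPC socket is the default /var/tmp/spdk.sock, that the PSK interchange file already exists at /tmp/tmp.JCZme9uQoS, and that the $rpc shorthand is only a local convenience for this sketch:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport; -o matches the "c2h_success": false seen in the dump above
  $rpc nvmf_create_transport -t tcp -o
  # subsystem with a TLS-capable listener (-k) and a malloc bdev as namespace 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # admit host1 only when it presents the PSK from the interchange file (path form, per the deprecation warning above)
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JCZme9uQoS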
00:18:50.612 14:55:33 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:50.612 14:55:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:50.612 14:55:33 -- common/autotest_common.sh@10 -- # set +x 00:18:50.612 14:55:33 -- target/tls.sh@204 -- # echo '{ 00:18:50.612 "subsystems": [ 00:18:50.612 { 00:18:50.612 "subsystem": "keyring", 00:18:50.612 "config": [] 00:18:50.612 }, 00:18:50.612 { 00:18:50.612 "subsystem": "iobuf", 00:18:50.612 "config": [ 00:18:50.612 { 00:18:50.612 "method": "iobuf_set_options", 00:18:50.612 "params": { 00:18:50.612 "small_pool_count": 8192, 00:18:50.612 "large_pool_count": 1024, 00:18:50.612 "small_bufsize": 8192, 00:18:50.612 "large_bufsize": 135168 00:18:50.612 } 00:18:50.612 } 00:18:50.612 ] 00:18:50.612 }, 00:18:50.612 { 00:18:50.612 "subsystem": "sock", 00:18:50.612 "config": [ 00:18:50.612 { 00:18:50.612 "method": "sock_impl_set_options", 00:18:50.612 "params": { 00:18:50.612 "impl_name": "posix", 00:18:50.612 "recv_buf_size": 2097152, 00:18:50.612 "send_buf_size": 2097152, 00:18:50.612 "enable_recv_pipe": true, 00:18:50.612 "enable_quickack": false, 00:18:50.612 "enable_placement_id": 0, 00:18:50.612 "enable_zerocopy_send_server": true, 00:18:50.612 "enable_zerocopy_send_client": false, 00:18:50.612 "zerocopy_threshold": 0, 00:18:50.612 "tls_version": 0, 00:18:50.612 "enable_ktls": false 00:18:50.612 } 00:18:50.612 }, 00:18:50.612 { 00:18:50.612 "method": "sock_impl_set_options", 00:18:50.612 "params": { 00:18:50.612 "impl_name": "ssl", 00:18:50.612 "recv_buf_size": 4096, 00:18:50.612 "send_buf_size": 4096, 00:18:50.612 "enable_recv_pipe": true, 00:18:50.612 "enable_quickack": false, 00:18:50.612 "enable_placement_id": 0, 00:18:50.613 "enable_zerocopy_send_server": true, 00:18:50.613 "enable_zerocopy_send_client": false, 00:18:50.613 "zerocopy_threshold": 0, 00:18:50.613 "tls_version": 0, 00:18:50.613 "enable_ktls": false 00:18:50.613 } 00:18:50.613 } 00:18:50.613 ] 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "subsystem": "vmd", 00:18:50.613 "config": [] 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "subsystem": "accel", 00:18:50.613 "config": [ 00:18:50.613 { 00:18:50.613 "method": "accel_set_options", 00:18:50.613 "params": { 00:18:50.613 "small_cache_size": 128, 00:18:50.613 "large_cache_size": 16, 00:18:50.613 "task_count": 2048, 00:18:50.613 "sequence_count": 2048, 00:18:50.613 "buf_count": 2048 00:18:50.613 } 00:18:50.613 } 00:18:50.613 ] 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "subsystem": "bdev", 00:18:50.613 "config": [ 00:18:50.613 { 00:18:50.613 "method": "bdev_set_options", 00:18:50.613 "params": { 00:18:50.613 "bdev_io_pool_size": 65535, 00:18:50.613 "bdev_io_cache_size": 256, 00:18:50.613 "bdev_auto_examine": true, 00:18:50.613 "iobuf_small_cache_size": 128, 00:18:50.613 "iobuf_large_cache_size": 16 00:18:50.613 } 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "method": "bdev_raid_set_options", 00:18:50.613 "params": { 00:18:50.613 "process_window_size_kb": 1024 00:18:50.613 } 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "method": "bdev_iscsi_set_options", 00:18:50.613 "params": { 00:18:50.613 "timeout_sec": 30 00:18:50.613 } 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "method": "bdev_nvme_set_options", 00:18:50.613 "params": { 00:18:50.613 "action_on_timeout": "none", 00:18:50.613 "timeout_us": 0, 00:18:50.613 "timeout_admin_us": 0, 00:18:50.613 "keep_alive_timeout_ms": 10000, 00:18:50.613 
"arbitration_burst": 0, 00:18:50.613 "low_priority_weight": 0, 00:18:50.613 "medium_priority_weight": 0, 00:18:50.613 "high_priority_weight": 0, 00:18:50.613 "nvme_adminq_poll_period_us": 10000, 00:18:50.613 "nvme_ioq_poll_period_us": 0, 00:18:50.613 "io_queue_requests": 512, 00:18:50.613 "delay_cmd_submit": true, 00:18:50.613 "transport_retry_count": 4, 00:18:50.613 "bdev_retry_count": 3, 00:18:50.613 "transport_ack_timeout": 0, 00:18:50.613 "ctrlr_loss_timeout_sec": 0, 00:18:50.613 "reconnect_delay_sec": 0, 00:18:50.613 "fast_io_fail_timeout_sec": 0, 00:18:50.613 "disable_auto_failback": false, 00:18:50.613 "generate_uuids": false, 00:18:50.613 "transport_tos": 0, 00:18:50.613 "nvme_error_stat": false, 00:18:50.613 "rdma_srq_size": 0, 00:18:50.613 "io_path_stat": false, 00:18:50.613 "allow_accel_sequence": false, 00:18:50.613 "rdma_max_cq_size": 0, 00:18:50.613 "rdma_cm_event_timeout_ms": 0, 00:18:50.613 "dhchap_digests": [ 00:18:50.613 "sha256", 00:18:50.613 "sha384", 00:18:50.613 "sha512" 00:18:50.613 ], 00:18:50.613 "dhchap_dhgroups": [ 00:18:50.613 "null", 00:18:50.613 "ffdhe2048", 00:18:50.613 "ffdhe3072", 00:18:50.613 "ffdhe4096", 00:18:50.613 "ffdhe6144", 00:18:50.613 "ffdhe8192" 00:18:50.613 ] 00:18:50.613 } 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "method": "bdev_nvme_attach_controller", 00:18:50.613 "params": { 00:18:50.613 "name": "TLSTEST", 00:18:50.613 "trtype": "TCP", 00:18:50.613 "adrfam": "IPv4", 00:18:50.613 "traddr": "10.0.0.2", 00:18:50.613 "trsvcid": "4420", 00:18:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.613 "prchk_reftag": false, 00:18:50.613 "prchk_guard": false, 00:18:50.613 "ctrlr_loss_timeout_sec": 0, 00:18:50.613 "reconnect_delay_sec": 0, 00:18:50.613 "fast_io_fail_timeout_sec": 0, 00:18:50.613 "psk": "/tmp/tmp.JCZme9uQoS", 00:18:50.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.613 "hdgst": false, 00:18:50.613 "ddgst": false 00:18:50.613 } 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "method": "bdev_nvme_set_hotplug", 00:18:50.613 "params": { 00:18:50.613 "period_us": 100000, 00:18:50.613 "enable": false 00:18:50.613 } 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "method": "bdev_wait_for_examine" 00:18:50.613 } 00:18:50.613 ] 00:18:50.613 }, 00:18:50.613 { 00:18:50.613 "subsystem": "nbd", 00:18:50.613 "config": [] 00:18:50.613 } 00:18:50.613 ] 00:18:50.613 }' 00:18:50.613 [2024-04-26 14:55:33.070288] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:18:50.613 [2024-04-26 14:55:33.070341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089496 ] 00:18:50.613 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.613 [2024-04-26 14:55:33.121071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.613 [2024-04-26 14:55:33.171659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.873 [2024-04-26 14:55:33.288338] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.873 [2024-04-26 14:55:33.288406] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:51.442 14:55:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.442 14:55:33 -- common/autotest_common.sh@850 -- # return 0 00:18:51.442 14:55:33 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:51.442 Running I/O for 10 seconds... 00:19:01.465 00:19:01.465 Latency(us) 00:19:01.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.465 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:01.465 Verification LBA range: start 0x0 length 0x2000 00:19:01.465 TLSTESTn1 : 10.01 5817.74 22.73 0.00 0.00 21970.81 4505.60 50899.63 00:19:01.465 =================================================================================================================== 00:19:01.465 Total : 5817.74 22.73 0.00 0.00 21970.81 4505.60 50899.63 00:19:01.465 0 00:19:01.465 14:55:43 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:01.465 14:55:43 -- target/tls.sh@214 -- # killprocess 1089496 00:19:01.465 14:55:43 -- common/autotest_common.sh@936 -- # '[' -z 1089496 ']' 00:19:01.465 14:55:43 -- common/autotest_common.sh@940 -- # kill -0 1089496 00:19:01.465 14:55:43 -- common/autotest_common.sh@941 -- # uname 00:19:01.465 14:55:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.465 14:55:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1089496 00:19:01.465 14:55:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:01.465 14:55:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:01.465 14:55:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1089496' 00:19:01.465 killing process with pid 1089496 00:19:01.465 14:55:44 -- common/autotest_common.sh@955 -- # kill 1089496 00:19:01.465 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.465 00:19:01.465 Latency(us) 00:19:01.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.465 =================================================================================================================== 00:19:01.465 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.465 [2024-04-26 14:55:44.044115] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:01.465 14:55:44 -- common/autotest_common.sh@960 -- # wait 1089496 00:19:01.725 14:55:44 -- target/tls.sh@215 -- # killprocess 1089424 00:19:01.725 14:55:44 -- common/autotest_common.sh@936 -- # '[' -z 1089424 ']' 
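bdevperf here is started with -z, so it comes up idle, takes its bdev/TLS configuration as JSON on /dev/fd/63, and only generates I/O once perform_tests arrives over its RPC socket; the 10-second verify run and the TLSTESTn1 table above are the result. A rough equivalent by hand, assuming the JSON shown above has been saved to a local bdevperf_tls.json (a file name used only for this sketch):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # idle bdevperf: core mask 0x4, queue depth 128, 4096-byte verify workload for 10 s
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c bdevperf_tls.json &
  # once the TLSTEST controller is attached, trigger the workload over the RPC socket
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests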
00:19:01.725 14:55:44 -- common/autotest_common.sh@940 -- # kill -0 1089424 00:19:01.725 14:55:44 -- common/autotest_common.sh@941 -- # uname 00:19:01.725 14:55:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.725 14:55:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1089424 00:19:01.725 14:55:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:01.725 14:55:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:01.725 14:55:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1089424' 00:19:01.725 killing process with pid 1089424 00:19:01.725 14:55:44 -- common/autotest_common.sh@955 -- # kill 1089424 00:19:01.725 [2024-04-26 14:55:44.214389] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:01.725 14:55:44 -- common/autotest_common.sh@960 -- # wait 1089424 00:19:01.725 14:55:44 -- target/tls.sh@218 -- # nvmfappstart 00:19:01.725 14:55:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:01.725 14:55:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:01.725 14:55:44 -- common/autotest_common.sh@10 -- # set +x 00:19:01.725 14:55:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:01.725 14:55:44 -- nvmf/common.sh@470 -- # nvmfpid=1091835 00:19:01.725 14:55:44 -- nvmf/common.sh@471 -- # waitforlisten 1091835 00:19:01.725 14:55:44 -- common/autotest_common.sh@817 -- # '[' -z 1091835 ']' 00:19:01.725 14:55:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.725 14:55:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:01.725 14:55:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.725 14:55:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:01.725 14:55:44 -- common/autotest_common.sh@10 -- # set +x 00:19:01.725 [2024-04-26 14:55:44.366584] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:01.725 [2024-04-26 14:55:44.366628] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.725 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.986 [2024-04-26 14:55:44.421191] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.986 [2024-04-26 14:55:44.483096] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.986 [2024-04-26 14:55:44.483132] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.986 [2024-04-26 14:55:44.483140] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.986 [2024-04-26 14:55:44.483146] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.986 [2024-04-26 14:55:44.483152] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.986 [2024-04-26 14:55:44.483173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.557 14:55:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:02.557 14:55:45 -- common/autotest_common.sh@850 -- # return 0 00:19:02.557 14:55:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:02.557 14:55:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:02.557 14:55:45 -- common/autotest_common.sh@10 -- # set +x 00:19:02.557 14:55:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.557 14:55:45 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.JCZme9uQoS 00:19:02.557 14:55:45 -- target/tls.sh@49 -- # local key=/tmp/tmp.JCZme9uQoS 00:19:02.557 14:55:45 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:02.818 [2024-04-26 14:55:45.322109] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.818 14:55:45 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.079 14:55:45 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.079 [2024-04-26 14:55:45.610829] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.079 [2024-04-26 14:55:45.611038] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.079 14:55:45 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.340 malloc0 00:19:03.340 14:55:45 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.340 14:55:45 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JCZme9uQoS 00:19:03.600 [2024-04-26 14:55:46.058778] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:03.600 14:55:46 -- target/tls.sh@222 -- # bdevperf_pid=1092199 00:19:03.600 14:55:46 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:03.600 14:55:46 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:03.600 14:55:46 -- target/tls.sh@225 -- # waitforlisten 1092199 /var/tmp/bdevperf.sock 00:19:03.600 14:55:46 -- common/autotest_common.sh@817 -- # '[' -z 1092199 ']' 00:19:03.600 14:55:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.600 14:55:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:03.600 14:55:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:03.600 14:55:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:03.600 14:55:46 -- common/autotest_common.sh@10 -- # set +x 00:19:03.600 [2024-04-26 14:55:46.119715] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:03.600 [2024-04-26 14:55:46.119763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092199 ] 00:19:03.600 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.600 [2024-04-26 14:55:46.196641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.600 [2024-04-26 14:55:46.248783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.541 14:55:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:04.541 14:55:46 -- common/autotest_common.sh@850 -- # return 0 00:19:04.541 14:55:46 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JCZme9uQoS 00:19:04.541 14:55:47 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:04.541 [2024-04-26 14:55:47.158831] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.802 nvme0n1 00:19:04.802 14:55:47 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:04.802 Running I/O for 1 seconds... 
00:19:05.744 00:19:05.744 Latency(us) 00:19:05.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.744 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:05.744 Verification LBA range: start 0x0 length 0x2000 00:19:05.744 nvme0n1 : 1.05 4811.47 18.79 0.00 0.00 25981.65 4505.60 48496.64 00:19:05.744 =================================================================================================================== 00:19:05.744 Total : 4811.47 18.79 0.00 0.00 25981.65 4505.60 48496.64 00:19:05.744 0 00:19:06.005 14:55:48 -- target/tls.sh@234 -- # killprocess 1092199 00:19:06.005 14:55:48 -- common/autotest_common.sh@936 -- # '[' -z 1092199 ']' 00:19:06.005 14:55:48 -- common/autotest_common.sh@940 -- # kill -0 1092199 00:19:06.005 14:55:48 -- common/autotest_common.sh@941 -- # uname 00:19:06.005 14:55:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.005 14:55:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1092199 00:19:06.005 14:55:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:06.005 14:55:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:06.005 14:55:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1092199' 00:19:06.005 killing process with pid 1092199 00:19:06.005 14:55:48 -- common/autotest_common.sh@955 -- # kill 1092199 00:19:06.005 Received shutdown signal, test time was about 1.000000 seconds 00:19:06.005 00:19:06.005 Latency(us) 00:19:06.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.005 =================================================================================================================== 00:19:06.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.005 14:55:48 -- common/autotest_common.sh@960 -- # wait 1092199 00:19:06.005 14:55:48 -- target/tls.sh@235 -- # killprocess 1091835 00:19:06.005 14:55:48 -- common/autotest_common.sh@936 -- # '[' -z 1091835 ']' 00:19:06.005 14:55:48 -- common/autotest_common.sh@940 -- # kill -0 1091835 00:19:06.005 14:55:48 -- common/autotest_common.sh@941 -- # uname 00:19:06.005 14:55:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.005 14:55:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1091835 00:19:06.005 14:55:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:06.005 14:55:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:06.005 14:55:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1091835' 00:19:06.005 killing process with pid 1091835 00:19:06.005 14:55:48 -- common/autotest_common.sh@955 -- # kill 1091835 00:19:06.005 [2024-04-26 14:55:48.639213] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:06.005 14:55:48 -- common/autotest_common.sh@960 -- # wait 1091835 00:19:06.266 14:55:48 -- target/tls.sh@238 -- # nvmfappstart 00:19:06.266 14:55:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:06.266 14:55:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:06.266 14:55:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.266 14:55:48 -- nvmf/common.sh@470 -- # nvmfpid=1092560 00:19:06.266 14:55:48 -- nvmf/common.sh@471 -- # waitforlisten 1092560 00:19:06.266 14:55:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
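Two ways of handing the PSK to the initiator appear in this run: the first bdevperf pass embedded "psk": "/tmp/tmp.JCZme9uQoS" directly in bdev_nvme_attach_controller, which is what trips the spdk_nvme_ctrlr_opts.psk deprecation warning, while the pass that just finished registers the file in the keyring first and refers to it as key0. Roughly, against the bdevperf RPC socket used here (with $rpc the same local shorthand as before):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # register the PSK interchange file under the name key0
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JCZme9uQoS
  # attach the TLS-protected controller, referencing the key by name rather than by path
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1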
00:19:06.266 14:55:48 -- common/autotest_common.sh@817 -- # '[' -z 1092560 ']' 00:19:06.266 14:55:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.266 14:55:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:06.266 14:55:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.266 14:55:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:06.266 14:55:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.266 [2024-04-26 14:55:48.832184] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:06.266 [2024-04-26 14:55:48.832241] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.266 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.266 [2024-04-26 14:55:48.896851] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.526 [2024-04-26 14:55:48.959171] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.526 [2024-04-26 14:55:48.959206] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.526 [2024-04-26 14:55:48.959214] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.526 [2024-04-26 14:55:48.959220] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.526 [2024-04-26 14:55:48.959226] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:06.526 [2024-04-26 14:55:48.959244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.097 14:55:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:07.097 14:55:49 -- common/autotest_common.sh@850 -- # return 0 00:19:07.097 14:55:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:07.097 14:55:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:07.097 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:19:07.097 14:55:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.097 14:55:49 -- target/tls.sh@239 -- # rpc_cmd 00:19:07.097 14:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.097 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:19:07.097 [2024-04-26 14:55:49.645829] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.097 malloc0 00:19:07.097 [2024-04-26 14:55:49.672568] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.097 [2024-04-26 14:55:49.672768] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.097 14:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.097 14:55:49 -- target/tls.sh@252 -- # bdevperf_pid=1092904 00:19:07.097 14:55:49 -- target/tls.sh@254 -- # waitforlisten 1092904 /var/tmp/bdevperf.sock 00:19:07.097 14:55:49 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:07.097 14:55:49 -- common/autotest_common.sh@817 -- # '[' -z 1092904 ']' 00:19:07.097 14:55:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.097 14:55:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:07.097 14:55:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.097 14:55:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:07.097 14:55:49 -- common/autotest_common.sh@10 -- # set +x 00:19:07.097 [2024-04-26 14:55:49.748852] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
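The bdevperf pass that follows ends with save_config snapshots of both sides: tgtcfg is taken from the target's default RPC socket and bperfcfg from bdevperf's, and the dumps below show the key0 keyring entry on both ends plus the attached nvme0 controller. Pulling the same snapshots by hand would look roughly like this (the output file names are placeholders for this sketch):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # running target configuration, including keyring and nvmf subsystem state
  $rpc save_config > tgtcfg.json
  # running bdevperf configuration, including the attached nvme0 controller
  $rpc -s /var/tmp/bdevperf.sock save_config > bperfcfg.json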
00:19:07.097 [2024-04-26 14:55:49.748897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092904 ] 00:19:07.358 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.358 [2024-04-26 14:55:49.824850] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.358 [2024-04-26 14:55:49.876931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.930 14:55:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:07.930 14:55:50 -- common/autotest_common.sh@850 -- # return 0 00:19:07.930 14:55:50 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JCZme9uQoS 00:19:08.190 14:55:50 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:08.190 [2024-04-26 14:55:50.807090] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.451 nvme0n1 00:19:08.451 14:55:50 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:08.451 Running I/O for 1 seconds... 00:19:09.390 00:19:09.390 Latency(us) 00:19:09.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.390 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:09.390 Verification LBA range: start 0x0 length 0x2000 00:19:09.390 nvme0n1 : 1.01 4787.30 18.70 0.00 0.00 26555.29 5870.93 37573.97 00:19:09.390 =================================================================================================================== 00:19:09.390 Total : 4787.30 18.70 0.00 0.00 26555.29 5870.93 37573.97 00:19:09.390 0 00:19:09.390 14:55:52 -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:09.390 14:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.390 14:55:52 -- common/autotest_common.sh@10 -- # set +x 00:19:09.650 14:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.650 14:55:52 -- target/tls.sh@263 -- # tgtcfg='{ 00:19:09.650 "subsystems": [ 00:19:09.650 { 00:19:09.650 "subsystem": "keyring", 00:19:09.650 "config": [ 00:19:09.650 { 00:19:09.650 "method": "keyring_file_add_key", 00:19:09.650 "params": { 00:19:09.650 "name": "key0", 00:19:09.650 "path": "/tmp/tmp.JCZme9uQoS" 00:19:09.650 } 00:19:09.650 } 00:19:09.650 ] 00:19:09.650 }, 00:19:09.650 { 00:19:09.650 "subsystem": "iobuf", 00:19:09.650 "config": [ 00:19:09.650 { 00:19:09.650 "method": "iobuf_set_options", 00:19:09.650 "params": { 00:19:09.650 "small_pool_count": 8192, 00:19:09.650 "large_pool_count": 1024, 00:19:09.650 "small_bufsize": 8192, 00:19:09.650 "large_bufsize": 135168 00:19:09.650 } 00:19:09.650 } 00:19:09.650 ] 00:19:09.650 }, 00:19:09.650 { 00:19:09.650 "subsystem": "sock", 00:19:09.650 "config": [ 00:19:09.650 { 00:19:09.650 "method": "sock_impl_set_options", 00:19:09.650 "params": { 00:19:09.650 "impl_name": "posix", 00:19:09.650 "recv_buf_size": 2097152, 00:19:09.650 "send_buf_size": 2097152, 00:19:09.650 "enable_recv_pipe": true, 00:19:09.650 "enable_quickack": false, 00:19:09.650 "enable_placement_id": 0, 00:19:09.650 
"enable_zerocopy_send_server": true, 00:19:09.650 "enable_zerocopy_send_client": false, 00:19:09.650 "zerocopy_threshold": 0, 00:19:09.650 "tls_version": 0, 00:19:09.650 "enable_ktls": false 00:19:09.650 } 00:19:09.650 }, 00:19:09.650 { 00:19:09.650 "method": "sock_impl_set_options", 00:19:09.650 "params": { 00:19:09.650 "impl_name": "ssl", 00:19:09.650 "recv_buf_size": 4096, 00:19:09.650 "send_buf_size": 4096, 00:19:09.650 "enable_recv_pipe": true, 00:19:09.650 "enable_quickack": false, 00:19:09.650 "enable_placement_id": 0, 00:19:09.650 "enable_zerocopy_send_server": true, 00:19:09.650 "enable_zerocopy_send_client": false, 00:19:09.650 "zerocopy_threshold": 0, 00:19:09.650 "tls_version": 0, 00:19:09.650 "enable_ktls": false 00:19:09.650 } 00:19:09.650 } 00:19:09.650 ] 00:19:09.650 }, 00:19:09.650 { 00:19:09.650 "subsystem": "vmd", 00:19:09.650 "config": [] 00:19:09.650 }, 00:19:09.650 { 00:19:09.650 "subsystem": "accel", 00:19:09.650 "config": [ 00:19:09.650 { 00:19:09.650 "method": "accel_set_options", 00:19:09.650 "params": { 00:19:09.650 "small_cache_size": 128, 00:19:09.650 "large_cache_size": 16, 00:19:09.650 "task_count": 2048, 00:19:09.650 "sequence_count": 2048, 00:19:09.650 "buf_count": 2048 00:19:09.650 } 00:19:09.650 } 00:19:09.650 ] 00:19:09.650 }, 00:19:09.650 { 00:19:09.650 "subsystem": "bdev", 00:19:09.650 "config": [ 00:19:09.650 { 00:19:09.650 "method": "bdev_set_options", 00:19:09.650 "params": { 00:19:09.650 "bdev_io_pool_size": 65535, 00:19:09.650 "bdev_io_cache_size": 256, 00:19:09.650 "bdev_auto_examine": true, 00:19:09.650 "iobuf_small_cache_size": 128, 00:19:09.650 "iobuf_large_cache_size": 16 00:19:09.650 } 00:19:09.650 }, 00:19:09.650 { 00:19:09.650 "method": "bdev_raid_set_options", 00:19:09.650 "params": { 00:19:09.650 "process_window_size_kb": 1024 00:19:09.650 } 00:19:09.650 }, 00:19:09.650 { 00:19:09.650 "method": "bdev_iscsi_set_options", 00:19:09.650 "params": { 00:19:09.650 "timeout_sec": 30 00:19:09.650 } 00:19:09.650 }, 00:19:09.650 { 00:19:09.650 "method": "bdev_nvme_set_options", 00:19:09.650 "params": { 00:19:09.650 "action_on_timeout": "none", 00:19:09.650 "timeout_us": 0, 00:19:09.650 "timeout_admin_us": 0, 00:19:09.650 "keep_alive_timeout_ms": 10000, 00:19:09.650 "arbitration_burst": 0, 00:19:09.650 "low_priority_weight": 0, 00:19:09.650 "medium_priority_weight": 0, 00:19:09.650 "high_priority_weight": 0, 00:19:09.650 "nvme_adminq_poll_period_us": 10000, 00:19:09.650 "nvme_ioq_poll_period_us": 0, 00:19:09.650 "io_queue_requests": 0, 00:19:09.650 "delay_cmd_submit": true, 00:19:09.650 "transport_retry_count": 4, 00:19:09.650 "bdev_retry_count": 3, 00:19:09.650 "transport_ack_timeout": 0, 00:19:09.650 "ctrlr_loss_timeout_sec": 0, 00:19:09.650 "reconnect_delay_sec": 0, 00:19:09.650 "fast_io_fail_timeout_sec": 0, 00:19:09.650 "disable_auto_failback": false, 00:19:09.650 "generate_uuids": false, 00:19:09.650 "transport_tos": 0, 00:19:09.650 "nvme_error_stat": false, 00:19:09.650 "rdma_srq_size": 0, 00:19:09.650 "io_path_stat": false, 00:19:09.650 "allow_accel_sequence": false, 00:19:09.650 "rdma_max_cq_size": 0, 00:19:09.650 "rdma_cm_event_timeout_ms": 0, 00:19:09.650 "dhchap_digests": [ 00:19:09.650 "sha256", 00:19:09.650 "sha384", 00:19:09.650 "sha512" 00:19:09.651 ], 00:19:09.651 "dhchap_dhgroups": [ 00:19:09.651 "null", 00:19:09.651 "ffdhe2048", 00:19:09.651 "ffdhe3072", 00:19:09.651 "ffdhe4096", 00:19:09.651 "ffdhe6144", 00:19:09.651 "ffdhe8192" 00:19:09.651 ] 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": 
"bdev_nvme_set_hotplug", 00:19:09.651 "params": { 00:19:09.651 "period_us": 100000, 00:19:09.651 "enable": false 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": "bdev_malloc_create", 00:19:09.651 "params": { 00:19:09.651 "name": "malloc0", 00:19:09.651 "num_blocks": 8192, 00:19:09.651 "block_size": 4096, 00:19:09.651 "physical_block_size": 4096, 00:19:09.651 "uuid": "cb96b493-aea0-4ea3-b6d6-5d2d7e61f038", 00:19:09.651 "optimal_io_boundary": 0 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": "bdev_wait_for_examine" 00:19:09.651 } 00:19:09.651 ] 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "subsystem": "nbd", 00:19:09.651 "config": [] 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "subsystem": "scheduler", 00:19:09.651 "config": [ 00:19:09.651 { 00:19:09.651 "method": "framework_set_scheduler", 00:19:09.651 "params": { 00:19:09.651 "name": "static" 00:19:09.651 } 00:19:09.651 } 00:19:09.651 ] 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "subsystem": "nvmf", 00:19:09.651 "config": [ 00:19:09.651 { 00:19:09.651 "method": "nvmf_set_config", 00:19:09.651 "params": { 00:19:09.651 "discovery_filter": "match_any", 00:19:09.651 "admin_cmd_passthru": { 00:19:09.651 "identify_ctrlr": false 00:19:09.651 } 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": "nvmf_set_max_subsystems", 00:19:09.651 "params": { 00:19:09.651 "max_subsystems": 1024 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": "nvmf_set_crdt", 00:19:09.651 "params": { 00:19:09.651 "crdt1": 0, 00:19:09.651 "crdt2": 0, 00:19:09.651 "crdt3": 0 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": "nvmf_create_transport", 00:19:09.651 "params": { 00:19:09.651 "trtype": "TCP", 00:19:09.651 "max_queue_depth": 128, 00:19:09.651 "max_io_qpairs_per_ctrlr": 127, 00:19:09.651 "in_capsule_data_size": 4096, 00:19:09.651 "max_io_size": 131072, 00:19:09.651 "io_unit_size": 131072, 00:19:09.651 "max_aq_depth": 128, 00:19:09.651 "num_shared_buffers": 511, 00:19:09.651 "buf_cache_size": 4294967295, 00:19:09.651 "dif_insert_or_strip": false, 00:19:09.651 "zcopy": false, 00:19:09.651 "c2h_success": false, 00:19:09.651 "sock_priority": 0, 00:19:09.651 "abort_timeout_sec": 1, 00:19:09.651 "ack_timeout": 0, 00:19:09.651 "data_wr_pool_size": 0 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": "nvmf_create_subsystem", 00:19:09.651 "params": { 00:19:09.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.651 "allow_any_host": false, 00:19:09.651 "serial_number": "00000000000000000000", 00:19:09.651 "model_number": "SPDK bdev Controller", 00:19:09.651 "max_namespaces": 32, 00:19:09.651 "min_cntlid": 1, 00:19:09.651 "max_cntlid": 65519, 00:19:09.651 "ana_reporting": false 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": "nvmf_subsystem_add_host", 00:19:09.651 "params": { 00:19:09.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.651 "host": "nqn.2016-06.io.spdk:host1", 00:19:09.651 "psk": "key0" 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": "nvmf_subsystem_add_ns", 00:19:09.651 "params": { 00:19:09.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.651 "namespace": { 00:19:09.651 "nsid": 1, 00:19:09.651 "bdev_name": "malloc0", 00:19:09.651 "nguid": "CB96B493AEA04EA3B6D65D2D7E61F038", 00:19:09.651 "uuid": "cb96b493-aea0-4ea3-b6d6-5d2d7e61f038", 00:19:09.651 "no_auto_visible": false 00:19:09.651 } 00:19:09.651 } 00:19:09.651 }, 00:19:09.651 { 00:19:09.651 "method": "nvmf_subsystem_add_listener", 00:19:09.651 "params": { 
00:19:09.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.651 "listen_address": { 00:19:09.651 "trtype": "TCP", 00:19:09.651 "adrfam": "IPv4", 00:19:09.651 "traddr": "10.0.0.2", 00:19:09.651 "trsvcid": "4420" 00:19:09.651 }, 00:19:09.651 "secure_channel": true 00:19:09.651 } 00:19:09.651 } 00:19:09.651 ] 00:19:09.651 } 00:19:09.651 ] 00:19:09.651 }' 00:19:09.651 14:55:52 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:09.911 14:55:52 -- target/tls.sh@264 -- # bperfcfg='{ 00:19:09.911 "subsystems": [ 00:19:09.911 { 00:19:09.911 "subsystem": "keyring", 00:19:09.911 "config": [ 00:19:09.911 { 00:19:09.911 "method": "keyring_file_add_key", 00:19:09.911 "params": { 00:19:09.911 "name": "key0", 00:19:09.911 "path": "/tmp/tmp.JCZme9uQoS" 00:19:09.911 } 00:19:09.911 } 00:19:09.911 ] 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "subsystem": "iobuf", 00:19:09.911 "config": [ 00:19:09.911 { 00:19:09.911 "method": "iobuf_set_options", 00:19:09.911 "params": { 00:19:09.911 "small_pool_count": 8192, 00:19:09.911 "large_pool_count": 1024, 00:19:09.911 "small_bufsize": 8192, 00:19:09.911 "large_bufsize": 135168 00:19:09.911 } 00:19:09.911 } 00:19:09.911 ] 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "subsystem": "sock", 00:19:09.911 "config": [ 00:19:09.911 { 00:19:09.911 "method": "sock_impl_set_options", 00:19:09.911 "params": { 00:19:09.911 "impl_name": "posix", 00:19:09.911 "recv_buf_size": 2097152, 00:19:09.911 "send_buf_size": 2097152, 00:19:09.911 "enable_recv_pipe": true, 00:19:09.911 "enable_quickack": false, 00:19:09.911 "enable_placement_id": 0, 00:19:09.911 "enable_zerocopy_send_server": true, 00:19:09.911 "enable_zerocopy_send_client": false, 00:19:09.911 "zerocopy_threshold": 0, 00:19:09.911 "tls_version": 0, 00:19:09.911 "enable_ktls": false 00:19:09.911 } 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "method": "sock_impl_set_options", 00:19:09.911 "params": { 00:19:09.911 "impl_name": "ssl", 00:19:09.911 "recv_buf_size": 4096, 00:19:09.911 "send_buf_size": 4096, 00:19:09.911 "enable_recv_pipe": true, 00:19:09.911 "enable_quickack": false, 00:19:09.911 "enable_placement_id": 0, 00:19:09.911 "enable_zerocopy_send_server": true, 00:19:09.911 "enable_zerocopy_send_client": false, 00:19:09.911 "zerocopy_threshold": 0, 00:19:09.911 "tls_version": 0, 00:19:09.911 "enable_ktls": false 00:19:09.911 } 00:19:09.911 } 00:19:09.911 ] 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "subsystem": "vmd", 00:19:09.911 "config": [] 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "subsystem": "accel", 00:19:09.911 "config": [ 00:19:09.911 { 00:19:09.911 "method": "accel_set_options", 00:19:09.911 "params": { 00:19:09.911 "small_cache_size": 128, 00:19:09.911 "large_cache_size": 16, 00:19:09.911 "task_count": 2048, 00:19:09.911 "sequence_count": 2048, 00:19:09.911 "buf_count": 2048 00:19:09.911 } 00:19:09.911 } 00:19:09.911 ] 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "subsystem": "bdev", 00:19:09.911 "config": [ 00:19:09.911 { 00:19:09.911 "method": "bdev_set_options", 00:19:09.911 "params": { 00:19:09.911 "bdev_io_pool_size": 65535, 00:19:09.911 "bdev_io_cache_size": 256, 00:19:09.911 "bdev_auto_examine": true, 00:19:09.911 "iobuf_small_cache_size": 128, 00:19:09.911 "iobuf_large_cache_size": 16 00:19:09.911 } 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "method": "bdev_raid_set_options", 00:19:09.911 "params": { 00:19:09.911 "process_window_size_kb": 1024 00:19:09.911 } 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "method": 
"bdev_iscsi_set_options", 00:19:09.911 "params": { 00:19:09.911 "timeout_sec": 30 00:19:09.911 } 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "method": "bdev_nvme_set_options", 00:19:09.911 "params": { 00:19:09.911 "action_on_timeout": "none", 00:19:09.911 "timeout_us": 0, 00:19:09.911 "timeout_admin_us": 0, 00:19:09.911 "keep_alive_timeout_ms": 10000, 00:19:09.911 "arbitration_burst": 0, 00:19:09.911 "low_priority_weight": 0, 00:19:09.911 "medium_priority_weight": 0, 00:19:09.911 "high_priority_weight": 0, 00:19:09.911 "nvme_adminq_poll_period_us": 10000, 00:19:09.911 "nvme_ioq_poll_period_us": 0, 00:19:09.911 "io_queue_requests": 512, 00:19:09.911 "delay_cmd_submit": true, 00:19:09.911 "transport_retry_count": 4, 00:19:09.911 "bdev_retry_count": 3, 00:19:09.911 "transport_ack_timeout": 0, 00:19:09.911 "ctrlr_loss_timeout_sec": 0, 00:19:09.911 "reconnect_delay_sec": 0, 00:19:09.911 "fast_io_fail_timeout_sec": 0, 00:19:09.911 "disable_auto_failback": false, 00:19:09.911 "generate_uuids": false, 00:19:09.911 "transport_tos": 0, 00:19:09.911 "nvme_error_stat": false, 00:19:09.911 "rdma_srq_size": 0, 00:19:09.911 "io_path_stat": false, 00:19:09.911 "allow_accel_sequence": false, 00:19:09.911 "rdma_max_cq_size": 0, 00:19:09.911 "rdma_cm_event_timeout_ms": 0, 00:19:09.911 "dhchap_digests": [ 00:19:09.911 "sha256", 00:19:09.911 "sha384", 00:19:09.911 "sha512" 00:19:09.911 ], 00:19:09.911 "dhchap_dhgroups": [ 00:19:09.911 "null", 00:19:09.911 "ffdhe2048", 00:19:09.911 "ffdhe3072", 00:19:09.911 "ffdhe4096", 00:19:09.911 "ffdhe6144", 00:19:09.911 "ffdhe8192" 00:19:09.911 ] 00:19:09.911 } 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "method": "bdev_nvme_attach_controller", 00:19:09.911 "params": { 00:19:09.911 "name": "nvme0", 00:19:09.911 "trtype": "TCP", 00:19:09.911 "adrfam": "IPv4", 00:19:09.911 "traddr": "10.0.0.2", 00:19:09.911 "trsvcid": "4420", 00:19:09.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.911 "prchk_reftag": false, 00:19:09.911 "prchk_guard": false, 00:19:09.911 "ctrlr_loss_timeout_sec": 0, 00:19:09.911 "reconnect_delay_sec": 0, 00:19:09.911 "fast_io_fail_timeout_sec": 0, 00:19:09.911 "psk": "key0", 00:19:09.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.911 "hdgst": false, 00:19:09.911 "ddgst": false 00:19:09.911 } 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "method": "bdev_nvme_set_hotplug", 00:19:09.911 "params": { 00:19:09.911 "period_us": 100000, 00:19:09.911 "enable": false 00:19:09.911 } 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "method": "bdev_enable_histogram", 00:19:09.911 "params": { 00:19:09.911 "name": "nvme0n1", 00:19:09.911 "enable": true 00:19:09.911 } 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "method": "bdev_wait_for_examine" 00:19:09.911 } 00:19:09.911 ] 00:19:09.911 }, 00:19:09.911 { 00:19:09.911 "subsystem": "nbd", 00:19:09.911 "config": [] 00:19:09.911 } 00:19:09.911 ] 00:19:09.911 }' 00:19:09.911 14:55:52 -- target/tls.sh@266 -- # killprocess 1092904 00:19:09.911 14:55:52 -- common/autotest_common.sh@936 -- # '[' -z 1092904 ']' 00:19:09.911 14:55:52 -- common/autotest_common.sh@940 -- # kill -0 1092904 00:19:09.911 14:55:52 -- common/autotest_common.sh@941 -- # uname 00:19:09.911 14:55:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:09.911 14:55:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1092904 00:19:09.911 14:55:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:09.911 14:55:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:09.911 14:55:52 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1092904' 00:19:09.911 killing process with pid 1092904 00:19:09.911 14:55:52 -- common/autotest_common.sh@955 -- # kill 1092904 00:19:09.911 Received shutdown signal, test time was about 1.000000 seconds 00:19:09.911 00:19:09.911 Latency(us) 00:19:09.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.911 =================================================================================================================== 00:19:09.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.911 14:55:52 -- common/autotest_common.sh@960 -- # wait 1092904 00:19:09.911 14:55:52 -- target/tls.sh@267 -- # killprocess 1092560 00:19:09.911 14:55:52 -- common/autotest_common.sh@936 -- # '[' -z 1092560 ']' 00:19:09.911 14:55:52 -- common/autotest_common.sh@940 -- # kill -0 1092560 00:19:09.911 14:55:52 -- common/autotest_common.sh@941 -- # uname 00:19:09.911 14:55:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:09.911 14:55:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1092560 00:19:10.172 14:55:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:10.172 14:55:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:10.172 14:55:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1092560' 00:19:10.172 killing process with pid 1092560 00:19:10.172 14:55:52 -- common/autotest_common.sh@955 -- # kill 1092560 00:19:10.172 14:55:52 -- common/autotest_common.sh@960 -- # wait 1092560 00:19:10.172 14:55:52 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:10.172 14:55:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:10.172 14:55:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:10.172 14:55:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.172 14:55:52 -- target/tls.sh@269 -- # echo '{ 00:19:10.172 "subsystems": [ 00:19:10.172 { 00:19:10.172 "subsystem": "keyring", 00:19:10.172 "config": [ 00:19:10.172 { 00:19:10.172 "method": "keyring_file_add_key", 00:19:10.172 "params": { 00:19:10.172 "name": "key0", 00:19:10.172 "path": "/tmp/tmp.JCZme9uQoS" 00:19:10.172 } 00:19:10.172 } 00:19:10.172 ] 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "subsystem": "iobuf", 00:19:10.172 "config": [ 00:19:10.172 { 00:19:10.172 "method": "iobuf_set_options", 00:19:10.172 "params": { 00:19:10.172 "small_pool_count": 8192, 00:19:10.172 "large_pool_count": 1024, 00:19:10.172 "small_bufsize": 8192, 00:19:10.172 "large_bufsize": 135168 00:19:10.172 } 00:19:10.172 } 00:19:10.172 ] 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "subsystem": "sock", 00:19:10.172 "config": [ 00:19:10.172 { 00:19:10.172 "method": "sock_impl_set_options", 00:19:10.172 "params": { 00:19:10.172 "impl_name": "posix", 00:19:10.172 "recv_buf_size": 2097152, 00:19:10.172 "send_buf_size": 2097152, 00:19:10.172 "enable_recv_pipe": true, 00:19:10.172 "enable_quickack": false, 00:19:10.172 "enable_placement_id": 0, 00:19:10.172 "enable_zerocopy_send_server": true, 00:19:10.172 "enable_zerocopy_send_client": false, 00:19:10.172 "zerocopy_threshold": 0, 00:19:10.172 "tls_version": 0, 00:19:10.172 "enable_ktls": false 00:19:10.172 } 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "method": "sock_impl_set_options", 00:19:10.172 "params": { 00:19:10.172 "impl_name": "ssl", 00:19:10.172 "recv_buf_size": 4096, 00:19:10.172 "send_buf_size": 4096, 00:19:10.172 "enable_recv_pipe": true, 00:19:10.172 "enable_quickack": false, 00:19:10.172 "enable_placement_id": 
0, 00:19:10.172 "enable_zerocopy_send_server": true, 00:19:10.172 "enable_zerocopy_send_client": false, 00:19:10.172 "zerocopy_threshold": 0, 00:19:10.172 "tls_version": 0, 00:19:10.172 "enable_ktls": false 00:19:10.172 } 00:19:10.172 } 00:19:10.172 ] 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "subsystem": "vmd", 00:19:10.172 "config": [] 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "subsystem": "accel", 00:19:10.172 "config": [ 00:19:10.172 { 00:19:10.172 "method": "accel_set_options", 00:19:10.172 "params": { 00:19:10.172 "small_cache_size": 128, 00:19:10.172 "large_cache_size": 16, 00:19:10.172 "task_count": 2048, 00:19:10.172 "sequence_count": 2048, 00:19:10.172 "buf_count": 2048 00:19:10.172 } 00:19:10.172 } 00:19:10.172 ] 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "subsystem": "bdev", 00:19:10.172 "config": [ 00:19:10.172 { 00:19:10.172 "method": "bdev_set_options", 00:19:10.172 "params": { 00:19:10.172 "bdev_io_pool_size": 65535, 00:19:10.172 "bdev_io_cache_size": 256, 00:19:10.172 "bdev_auto_examine": true, 00:19:10.172 "iobuf_small_cache_size": 128, 00:19:10.172 "iobuf_large_cache_size": 16 00:19:10.172 } 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "method": "bdev_raid_set_options", 00:19:10.172 "params": { 00:19:10.172 "process_window_size_kb": 1024 00:19:10.172 } 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "method": "bdev_iscsi_set_options", 00:19:10.172 "params": { 00:19:10.172 "timeout_sec": 30 00:19:10.172 } 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "method": "bdev_nvme_set_options", 00:19:10.172 "params": { 00:19:10.172 "action_on_timeout": "none", 00:19:10.172 "timeout_us": 0, 00:19:10.172 "timeout_admin_us": 0, 00:19:10.172 "keep_alive_timeout_ms": 10000, 00:19:10.172 "arbitration_burst": 0, 00:19:10.172 "low_priority_weight": 0, 00:19:10.172 "medium_priority_weight": 0, 00:19:10.172 "high_priority_weight": 0, 00:19:10.172 "nvme_adminq_poll_period_us": 10000, 00:19:10.172 "nvme_ioq_poll_period_us": 0, 00:19:10.172 "io_queue_requests": 0, 00:19:10.172 "delay_cmd_submit": true, 00:19:10.172 "transport_retry_count": 4, 00:19:10.172 "bdev_retry_count": 3, 00:19:10.172 "transport_ack_timeout": 0, 00:19:10.172 "ctrlr_loss_timeout_sec": 0, 00:19:10.172 "reconnect_delay_sec": 0, 00:19:10.172 "fast_io_fail_timeout_sec": 0, 00:19:10.172 "disable_auto_failback": false, 00:19:10.172 "generate_uuids": false, 00:19:10.172 "transport_tos": 0, 00:19:10.172 "nvme_error_stat": false, 00:19:10.172 "rdma_srq_size": 0, 00:19:10.172 "io_path_stat": false, 00:19:10.172 "allow_accel_sequence": false, 00:19:10.172 "rdma_max_cq_size": 0, 00:19:10.172 "rdma_cm_event_timeout_ms": 0, 00:19:10.172 "dhchap_digests": [ 00:19:10.172 "sha256", 00:19:10.172 "sha384", 00:19:10.172 "sha512" 00:19:10.172 ], 00:19:10.172 "dhchap_dhgroups": [ 00:19:10.172 "null", 00:19:10.172 "ffdhe2048", 00:19:10.172 "ffdhe3072", 00:19:10.172 "ffdhe4096", 00:19:10.172 "ffdhe6144", 00:19:10.172 "ffdhe8192" 00:19:10.172 ] 00:19:10.172 } 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "method": "bdev_nvme_set_hotplug", 00:19:10.172 "params": { 00:19:10.172 "period_us": 100000, 00:19:10.172 "enable": false 00:19:10.172 } 00:19:10.172 }, 00:19:10.172 { 00:19:10.172 "method": "bdev_malloc_create", 00:19:10.172 "params": { 00:19:10.172 "name": "malloc0", 00:19:10.172 "num_blocks": 8192, 00:19:10.172 "block_size": 4096, 00:19:10.172 "physical_block_size": 4096, 00:19:10.172 "uuid": "cb96b493-aea0-4ea3-b6d6-5d2d7e61f038", 00:19:10.172 "optimal_io_boundary": 0 00:19:10.172 } 00:19:10.172 }, 00:19:10.173 { 00:19:10.173 "method": 
"bdev_wait_for_examine" 00:19:10.173 } 00:19:10.173 ] 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "subsystem": "nbd", 00:19:10.173 "config": [] 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "subsystem": "scheduler", 00:19:10.173 "config": [ 00:19:10.173 { 00:19:10.173 "method": "framework_set_scheduler", 00:19:10.173 "params": { 00:19:10.173 "name": "static" 00:19:10.173 } 00:19:10.173 } 00:19:10.173 ] 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "subsystem": "nvmf", 00:19:10.173 "config": [ 00:19:10.173 { 00:19:10.173 "method": "nvmf_set_config", 00:19:10.173 "params": { 00:19:10.173 "discovery_filter": "match_any", 00:19:10.173 "admin_cmd_passthru": { 00:19:10.173 "identify_ctrlr": false 00:19:10.173 } 00:19:10.173 } 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "method": "nvmf_set_max_subsystems", 00:19:10.173 "params": { 00:19:10.173 "max_subsystems": 1024 00:19:10.173 } 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "method": "nvmf_set_crdt", 00:19:10.173 "params": { 00:19:10.173 "crdt1": 0, 00:19:10.173 "crdt2": 0, 00:19:10.173 "crdt3": 0 00:19:10.173 } 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "method": "nvmf_create_transport", 00:19:10.173 "params": { 00:19:10.173 "trtype": "TCP", 00:19:10.173 "max_queue_depth": 128, 00:19:10.173 "max_io_qpairs_per_ctrlr": 127, 00:19:10.173 "in_capsule_data_size": 4096, 00:19:10.173 "max_io_size": 131072, 00:19:10.173 "io_unit_size": 131072, 00:19:10.173 "max_aq_depth": 128, 00:19:10.173 "num_shared_buffers": 511, 00:19:10.173 "buf_cache_size": 4294967295, 00:19:10.173 "dif_insert_or_strip": false, 00:19:10.173 "zcopy": false, 00:19:10.173 "c2h_success": false, 00:19:10.173 "sock_priority": 0, 00:19:10.173 "abort_timeout_sec": 1, 00:19:10.173 "ack_timeout": 0, 00:19:10.173 "data_wr_pool_size": 0 00:19:10.173 } 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "method": "nvmf_create_subsystem", 00:19:10.173 "params": { 00:19:10.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.173 "allow_any_host": false, 00:19:10.173 "serial_number": "00000000000000000000", 00:19:10.173 "model_number": "SPDK bdev Controller", 00:19:10.173 "max_namespaces": 32, 00:19:10.173 "min_cntlid": 1, 00:19:10.173 "max_cntlid": 65519, 00:19:10.173 "ana_reporting": false 00:19:10.173 } 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "method": "nvmf_subsystem_add_host", 00:19:10.173 "params": { 00:19:10.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.173 "host": "nqn.2016-06.io.spdk:host1", 00:19:10.173 "psk": "key0" 00:19:10.173 } 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "method": "nvmf_subsystem_add_ns", 00:19:10.173 "params": { 00:19:10.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.173 "namespace": { 00:19:10.173 "nsid": 1, 00:19:10.173 "bdev_name": "malloc0", 00:19:10.173 "nguid": "CB96B493AEA04EA3B6D65D2D7E61F038", 00:19:10.173 "uuid": "cb96b493-aea0-4ea3-b6d6-5d2d7e61f038", 00:19:10.173 "no_auto_visible": false 00:19:10.173 } 00:19:10.173 } 00:19:10.173 }, 00:19:10.173 { 00:19:10.173 "method": "nvmf_subsystem_add_listener", 00:19:10.173 "params": { 00:19:10.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.173 "listen_address": { 00:19:10.173 "trtype": "TCP", 00:19:10.173 "adrfam": "IPv4", 00:19:10.173 "traddr": "10.0.0.2", 00:19:10.173 "trsvcid": "4420" 00:19:10.173 }, 00:19:10.173 "secure_channel": true 00:19:10.173 } 00:19:10.173 } 00:19:10.173 ] 00:19:10.173 } 00:19:10.173 ] 00:19:10.173 }' 00:19:10.173 14:55:52 -- nvmf/common.sh@470 -- # nvmfpid=1093530 00:19:10.173 14:55:52 -- nvmf/common.sh@471 -- # waitforlisten 1093530 00:19:10.173 14:55:52 -- nvmf/common.sh@469 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:10.173 14:55:52 -- common/autotest_common.sh@817 -- # '[' -z 1093530 ']' 00:19:10.173 14:55:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.173 14:55:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:10.173 14:55:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.173 14:55:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:10.173 14:55:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.173 [2024-04-26 14:55:52.783540] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:10.173 [2024-04-26 14:55:52.783595] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.173 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.434 [2024-04-26 14:55:52.850605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.434 [2024-04-26 14:55:52.913076] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.434 [2024-04-26 14:55:52.913115] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.434 [2024-04-26 14:55:52.913123] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.434 [2024-04-26 14:55:52.913129] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.434 [2024-04-26 14:55:52.913135] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.434 [2024-04-26 14:55:52.913191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.694 [2024-04-26 14:55:53.102393] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.694 [2024-04-26 14:55:53.134397] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.694 [2024-04-26 14:55:53.143150] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.955 14:55:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:10.955 14:55:53 -- common/autotest_common.sh@850 -- # return 0 00:19:10.955 14:55:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:10.955 14:55:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:10.955 14:55:53 -- common/autotest_common.sh@10 -- # set +x 00:19:11.217 14:55:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.217 14:55:53 -- target/tls.sh@272 -- # bdevperf_pid=1093620 00:19:11.217 14:55:53 -- target/tls.sh@273 -- # waitforlisten 1093620 /var/tmp/bdevperf.sock 00:19:11.217 14:55:53 -- common/autotest_common.sh@817 -- # '[' -z 1093620 ']' 00:19:11.217 14:55:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.217 14:55:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:11.217 14:55:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:11.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.217 14:55:53 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:11.217 14:55:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:11.217 14:55:53 -- common/autotest_common.sh@10 -- # set +x 00:19:11.217 14:55:53 -- target/tls.sh@270 -- # echo '{ 00:19:11.217 "subsystems": [ 00:19:11.217 { 00:19:11.217 "subsystem": "keyring", 00:19:11.217 "config": [ 00:19:11.217 { 00:19:11.217 "method": "keyring_file_add_key", 00:19:11.217 "params": { 00:19:11.217 "name": "key0", 00:19:11.217 "path": "/tmp/tmp.JCZme9uQoS" 00:19:11.217 } 00:19:11.217 } 00:19:11.217 ] 00:19:11.217 }, 00:19:11.217 { 00:19:11.217 "subsystem": "iobuf", 00:19:11.217 "config": [ 00:19:11.217 { 00:19:11.217 "method": "iobuf_set_options", 00:19:11.217 "params": { 00:19:11.217 "small_pool_count": 8192, 00:19:11.217 "large_pool_count": 1024, 00:19:11.217 "small_bufsize": 8192, 00:19:11.217 "large_bufsize": 135168 00:19:11.217 } 00:19:11.217 } 00:19:11.217 ] 00:19:11.217 }, 00:19:11.217 { 00:19:11.217 "subsystem": "sock", 00:19:11.217 "config": [ 00:19:11.217 { 00:19:11.217 "method": "sock_impl_set_options", 00:19:11.217 "params": { 00:19:11.217 "impl_name": "posix", 00:19:11.217 "recv_buf_size": 2097152, 00:19:11.217 "send_buf_size": 2097152, 00:19:11.217 "enable_recv_pipe": true, 00:19:11.217 "enable_quickack": false, 00:19:11.217 "enable_placement_id": 0, 00:19:11.217 "enable_zerocopy_send_server": true, 00:19:11.217 "enable_zerocopy_send_client": false, 00:19:11.217 "zerocopy_threshold": 0, 00:19:11.217 "tls_version": 0, 00:19:11.217 "enable_ktls": false 00:19:11.217 } 00:19:11.217 }, 00:19:11.217 { 00:19:11.217 "method": "sock_impl_set_options", 00:19:11.217 "params": { 00:19:11.217 "impl_name": "ssl", 00:19:11.217 "recv_buf_size": 4096, 00:19:11.217 "send_buf_size": 4096, 00:19:11.217 "enable_recv_pipe": true, 00:19:11.217 "enable_quickack": false, 00:19:11.217 "enable_placement_id": 0, 00:19:11.217 "enable_zerocopy_send_server": true, 00:19:11.217 "enable_zerocopy_send_client": false, 00:19:11.217 "zerocopy_threshold": 0, 00:19:11.217 "tls_version": 0, 00:19:11.217 "enable_ktls": false 00:19:11.217 } 00:19:11.217 } 00:19:11.217 ] 00:19:11.217 }, 00:19:11.217 { 00:19:11.217 "subsystem": "vmd", 00:19:11.217 "config": [] 00:19:11.217 }, 00:19:11.217 { 00:19:11.217 "subsystem": "accel", 00:19:11.217 "config": [ 00:19:11.217 { 00:19:11.217 "method": "accel_set_options", 00:19:11.217 "params": { 00:19:11.217 "small_cache_size": 128, 00:19:11.217 "large_cache_size": 16, 00:19:11.217 "task_count": 2048, 00:19:11.217 "sequence_count": 2048, 00:19:11.217 "buf_count": 2048 00:19:11.217 } 00:19:11.217 } 00:19:11.217 ] 00:19:11.217 }, 00:19:11.217 { 00:19:11.217 "subsystem": "bdev", 00:19:11.217 "config": [ 00:19:11.217 { 00:19:11.217 "method": "bdev_set_options", 00:19:11.217 "params": { 00:19:11.217 "bdev_io_pool_size": 65535, 00:19:11.217 "bdev_io_cache_size": 256, 00:19:11.217 "bdev_auto_examine": true, 00:19:11.217 "iobuf_small_cache_size": 128, 00:19:11.217 "iobuf_large_cache_size": 16 00:19:11.217 } 00:19:11.217 }, 00:19:11.217 { 00:19:11.217 "method": "bdev_raid_set_options", 00:19:11.217 "params": { 00:19:11.217 "process_window_size_kb": 1024 00:19:11.217 } 00:19:11.217 }, 00:19:11.217 { 00:19:11.217 "method": "bdev_iscsi_set_options", 00:19:11.217 "params": { 00:19:11.218 
"timeout_sec": 30 00:19:11.218 } 00:19:11.218 }, 00:19:11.218 { 00:19:11.218 "method": "bdev_nvme_set_options", 00:19:11.218 "params": { 00:19:11.218 "action_on_timeout": "none", 00:19:11.218 "timeout_us": 0, 00:19:11.218 "timeout_admin_us": 0, 00:19:11.218 "keep_alive_timeout_ms": 10000, 00:19:11.218 "arbitration_burst": 0, 00:19:11.218 "low_priority_weight": 0, 00:19:11.218 "medium_priority_weight": 0, 00:19:11.218 "high_priority_weight": 0, 00:19:11.218 "nvme_adminq_poll_period_us": 10000, 00:19:11.218 "nvme_ioq_poll_period_us": 0, 00:19:11.218 "io_queue_requests": 512, 00:19:11.218 "delay_cmd_submit": true, 00:19:11.218 "transport_retry_count": 4, 00:19:11.218 "bdev_retry_count": 3, 00:19:11.218 "transport_ack_timeout": 0, 00:19:11.218 "ctrlr_loss_timeout_sec": 0, 00:19:11.218 "reconnect_delay_sec": 0, 00:19:11.218 "fast_io_fail_timeout_sec": 0, 00:19:11.218 "disable_auto_failback": false, 00:19:11.218 "generate_uuids": false, 00:19:11.218 "transport_tos": 0, 00:19:11.218 "nvme_error_stat": false, 00:19:11.218 "rdma_srq_size": 0, 00:19:11.218 "io_path_stat": false, 00:19:11.218 "allow_accel_sequence": false, 00:19:11.218 "rdma_max_cq_size": 0, 00:19:11.218 "rdma_cm_event_timeout_ms": 0, 00:19:11.218 "dhchap_digests": [ 00:19:11.218 "sha256", 00:19:11.218 "sha384", 00:19:11.218 "sha512" 00:19:11.218 ], 00:19:11.218 "dhchap_dhgroups": [ 00:19:11.218 "null", 00:19:11.218 "ffdhe2048", 00:19:11.218 "ffdhe3072", 00:19:11.218 "ffdhe4096", 00:19:11.218 "ffdhe6144", 00:19:11.218 "ffdhe8192" 00:19:11.218 ] 00:19:11.218 } 00:19:11.218 }, 00:19:11.218 { 00:19:11.218 "method": "bdev_nvme_attach_controller", 00:19:11.218 "params": { 00:19:11.218 "name": "nvme0", 00:19:11.218 "trtype": "TCP", 00:19:11.218 "adrfam": "IPv4", 00:19:11.218 "traddr": "10.0.0.2", 00:19:11.218 "trsvcid": "4420", 00:19:11.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.218 "prchk_reftag": false, 00:19:11.218 "prchk_guard": false, 00:19:11.218 "ctrlr_loss_timeout_sec": 0, 00:19:11.218 "reconnect_delay_sec": 0, 00:19:11.218 "fast_io_fail_timeout_sec": 0, 00:19:11.218 "psk": "key0", 00:19:11.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.218 "hdgst": false, 00:19:11.218 "ddgst": false 00:19:11.218 } 00:19:11.218 }, 00:19:11.218 { 00:19:11.218 "method": "bdev_nvme_set_hotplug", 00:19:11.218 "params": { 00:19:11.218 "period_us": 100000, 00:19:11.218 "enable": false 00:19:11.218 } 00:19:11.218 }, 00:19:11.218 { 00:19:11.218 "method": "bdev_enable_histogram", 00:19:11.218 "params": { 00:19:11.218 "name": "nvme0n1", 00:19:11.218 "enable": true 00:19:11.218 } 00:19:11.218 }, 00:19:11.218 { 00:19:11.218 "method": "bdev_wait_for_examine" 00:19:11.218 } 00:19:11.218 ] 00:19:11.218 }, 00:19:11.218 { 00:19:11.218 "subsystem": "nbd", 00:19:11.218 "config": [] 00:19:11.218 } 00:19:11.218 ] 00:19:11.218 }' 00:19:11.218 [2024-04-26 14:55:53.698268] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:19:11.218 [2024-04-26 14:55:53.698317] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093620 ] 00:19:11.218 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.218 [2024-04-26 14:55:53.770956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.218 [2024-04-26 14:55:53.823185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.479 [2024-04-26 14:55:53.949157] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.051 14:55:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:12.051 14:55:54 -- common/autotest_common.sh@850 -- # return 0 00:19:12.051 14:55:54 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:12.051 14:55:54 -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:12.051 14:55:54 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.051 14:55:54 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:12.051 Running I/O for 1 seconds... 00:19:13.436 00:19:13.436 Latency(us) 00:19:13.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.436 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:13.436 Verification LBA range: start 0x0 length 0x2000 00:19:13.436 nvme0n1 : 1.05 5448.77 21.28 0.00 0.00 22945.27 5734.40 48933.55 00:19:13.436 =================================================================================================================== 00:19:13.436 Total : 5448.77 21.28 0.00 0.00 22945.27 5734.40 48933.55 00:19:13.436 0 00:19:13.436 14:55:55 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:13.436 14:55:55 -- target/tls.sh@279 -- # cleanup 00:19:13.436 14:55:55 -- target/tls.sh@15 -- # process_shm --id 0 00:19:13.436 14:55:55 -- common/autotest_common.sh@794 -- # type=--id 00:19:13.436 14:55:55 -- common/autotest_common.sh@795 -- # id=0 00:19:13.436 14:55:55 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:13.436 14:55:55 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:13.436 14:55:55 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:13.436 14:55:55 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:13.436 14:55:55 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:13.436 14:55:55 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:13.436 nvmf_trace.0 00:19:13.436 14:55:55 -- common/autotest_common.sh@809 -- # return 0 00:19:13.436 14:55:55 -- target/tls.sh@16 -- # killprocess 1093620 00:19:13.436 14:55:55 -- common/autotest_common.sh@936 -- # '[' -z 1093620 ']' 00:19:13.436 14:55:55 -- common/autotest_common.sh@940 -- # kill -0 1093620 00:19:13.436 14:55:55 -- common/autotest_common.sh@941 -- # uname 00:19:13.436 14:55:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:13.436 14:55:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1093620 00:19:13.436 14:55:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:13.436 14:55:55 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:19:13.436 14:55:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1093620' 00:19:13.436 killing process with pid 1093620 00:19:13.436 14:55:55 -- common/autotest_common.sh@955 -- # kill 1093620 00:19:13.436 Received shutdown signal, test time was about 1.000000 seconds 00:19:13.436 00:19:13.436 Latency(us) 00:19:13.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.436 =================================================================================================================== 00:19:13.436 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.436 14:55:55 -- common/autotest_common.sh@960 -- # wait 1093620 00:19:13.436 14:55:55 -- target/tls.sh@17 -- # nvmftestfini 00:19:13.436 14:55:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:13.436 14:55:55 -- nvmf/common.sh@117 -- # sync 00:19:13.436 14:55:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:13.436 14:55:55 -- nvmf/common.sh@120 -- # set +e 00:19:13.436 14:55:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.436 14:55:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:13.436 rmmod nvme_tcp 00:19:13.436 rmmod nvme_fabrics 00:19:13.436 rmmod nvme_keyring 00:19:13.436 14:55:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.436 14:55:56 -- nvmf/common.sh@124 -- # set -e 00:19:13.436 14:55:56 -- nvmf/common.sh@125 -- # return 0 00:19:13.436 14:55:56 -- nvmf/common.sh@478 -- # '[' -n 1093530 ']' 00:19:13.436 14:55:56 -- nvmf/common.sh@479 -- # killprocess 1093530 00:19:13.436 14:55:56 -- common/autotest_common.sh@936 -- # '[' -z 1093530 ']' 00:19:13.436 14:55:56 -- common/autotest_common.sh@940 -- # kill -0 1093530 00:19:13.436 14:55:56 -- common/autotest_common.sh@941 -- # uname 00:19:13.436 14:55:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:13.436 14:55:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1093530 00:19:13.436 14:55:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:13.436 14:55:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:13.436 14:55:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1093530' 00:19:13.436 killing process with pid 1093530 00:19:13.436 14:55:56 -- common/autotest_common.sh@955 -- # kill 1093530 00:19:13.436 14:55:56 -- common/autotest_common.sh@960 -- # wait 1093530 00:19:13.697 14:55:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:13.697 14:55:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:13.697 14:55:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:13.697 14:55:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:13.697 14:55:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:13.697 14:55:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.697 14:55:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.697 14:55:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.257 14:55:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:16.257 14:55:58 -- target/tls.sh@18 -- # rm -f /tmp/tmp.218joURRRe /tmp/tmp.Jjjmc5kuAq /tmp/tmp.JCZme9uQoS 00:19:16.257 00:19:16.257 real 1m24.127s 00:19:16.257 user 2m11.143s 00:19:16.257 sys 0m25.498s 00:19:16.257 14:55:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:16.257 14:55:58 -- common/autotest_common.sh@10 -- # set +x 00:19:16.257 ************************************ 00:19:16.257 END TEST nvmf_tls 00:19:16.257 
************************************ 00:19:16.257 14:55:58 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:16.257 14:55:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:16.257 14:55:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:16.257 14:55:58 -- common/autotest_common.sh@10 -- # set +x 00:19:16.257 ************************************ 00:19:16.257 START TEST nvmf_fips 00:19:16.257 ************************************ 00:19:16.258 14:55:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:16.258 * Looking for test storage... 00:19:16.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:16.258 14:55:58 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.258 14:55:58 -- nvmf/common.sh@7 -- # uname -s 00:19:16.258 14:55:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.258 14:55:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.258 14:55:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.258 14:55:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.258 14:55:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.258 14:55:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.258 14:55:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.258 14:55:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.258 14:55:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.258 14:55:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.258 14:55:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.258 14:55:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.258 14:55:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.258 14:55:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.258 14:55:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.258 14:55:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.258 14:55:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.258 14:55:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.258 14:55:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.258 14:55:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.258 14:55:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.258 14:55:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.258 14:55:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.258 14:55:58 -- paths/export.sh@5 -- # export PATH 00:19:16.258 14:55:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.258 14:55:58 -- nvmf/common.sh@47 -- # : 0 00:19:16.258 14:55:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.258 14:55:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.258 14:55:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.258 14:55:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.258 14:55:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.258 14:55:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.258 14:55:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.258 14:55:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.258 14:55:58 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.258 14:55:58 -- fips/fips.sh@89 -- # check_openssl_version 00:19:16.258 14:55:58 -- fips/fips.sh@83 -- # local target=3.0.0 00:19:16.258 14:55:58 -- fips/fips.sh@85 -- # openssl version 00:19:16.258 14:55:58 -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:16.258 14:55:58 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:16.258 14:55:58 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:16.258 14:55:58 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:16.258 14:55:58 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:16.258 14:55:58 -- scripts/common.sh@333 -- # IFS=.-: 00:19:16.258 14:55:58 -- scripts/common.sh@333 -- # read -ra ver1 00:19:16.258 14:55:58 -- scripts/common.sh@334 -- # IFS=.-: 00:19:16.258 14:55:58 -- scripts/common.sh@334 -- # read -ra ver2 00:19:16.258 14:55:58 -- scripts/common.sh@335 -- # local 'op=>=' 00:19:16.258 14:55:58 -- scripts/common.sh@337 -- # ver1_l=3 00:19:16.258 14:55:58 -- scripts/common.sh@338 -- # ver2_l=3 00:19:16.258 14:55:58 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:19:16.258 14:55:58 -- scripts/common.sh@341 -- # case "$op" in 00:19:16.258 14:55:58 -- scripts/common.sh@345 -- # : 1 00:19:16.258 14:55:58 -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:16.258 14:55:58 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.258 14:55:58 -- scripts/common.sh@362 -- # decimal 3 00:19:16.258 14:55:58 -- scripts/common.sh@350 -- # local d=3 00:19:16.258 14:55:58 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:16.258 14:55:58 -- scripts/common.sh@352 -- # echo 3 00:19:16.258 14:55:58 -- scripts/common.sh@362 -- # ver1[v]=3 00:19:16.258 14:55:58 -- scripts/common.sh@363 -- # decimal 3 00:19:16.258 14:55:58 -- scripts/common.sh@350 -- # local d=3 00:19:16.258 14:55:58 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:16.258 14:55:58 -- scripts/common.sh@352 -- # echo 3 00:19:16.258 14:55:58 -- scripts/common.sh@363 -- # ver2[v]=3 00:19:16.258 14:55:58 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:16.258 14:55:58 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:16.258 14:55:58 -- scripts/common.sh@361 -- # (( v++ )) 00:19:16.258 14:55:58 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.258 14:55:58 -- scripts/common.sh@362 -- # decimal 0 00:19:16.258 14:55:58 -- scripts/common.sh@350 -- # local d=0 00:19:16.258 14:55:58 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:16.258 14:55:58 -- scripts/common.sh@352 -- # echo 0 00:19:16.258 14:55:58 -- scripts/common.sh@362 -- # ver1[v]=0 00:19:16.258 14:55:58 -- scripts/common.sh@363 -- # decimal 0 00:19:16.258 14:55:58 -- scripts/common.sh@350 -- # local d=0 00:19:16.258 14:55:58 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:16.258 14:55:58 -- scripts/common.sh@352 -- # echo 0 00:19:16.258 14:55:58 -- scripts/common.sh@363 -- # ver2[v]=0 00:19:16.258 14:55:58 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:16.258 14:55:58 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:16.258 14:55:58 -- scripts/common.sh@361 -- # (( v++ )) 00:19:16.258 14:55:58 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.258 14:55:58 -- scripts/common.sh@362 -- # decimal 9 00:19:16.258 14:55:58 -- scripts/common.sh@350 -- # local d=9 00:19:16.258 14:55:58 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:16.258 14:55:58 -- scripts/common.sh@352 -- # echo 9 00:19:16.258 14:55:58 -- scripts/common.sh@362 -- # ver1[v]=9 00:19:16.258 14:55:58 -- scripts/common.sh@363 -- # decimal 0 00:19:16.258 14:55:58 -- scripts/common.sh@350 -- # local d=0 00:19:16.258 14:55:58 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:16.258 14:55:58 -- scripts/common.sh@352 -- # echo 0 00:19:16.258 14:55:58 -- scripts/common.sh@363 -- # ver2[v]=0 00:19:16.258 14:55:58 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:16.258 14:55:58 -- scripts/common.sh@364 -- # return 0 00:19:16.258 14:55:58 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:16.258 14:55:58 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:16.258 14:55:58 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:16.258 14:55:58 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:16.258 14:55:58 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:16.258 14:55:58 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:16.258 14:55:58 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:16.258 14:55:58 -- fips/fips.sh@113 -- # build_openssl_config 00:19:16.258 14:55:58 -- fips/fips.sh@37 -- # cat 00:19:16.258 14:55:58 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:16.258 14:55:58 -- fips/fips.sh@58 -- # cat - 00:19:16.258 14:55:58 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:16.258 14:55:58 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:16.258 14:55:58 -- fips/fips.sh@116 -- # mapfile -t providers 00:19:16.258 14:55:58 -- fips/fips.sh@116 -- # openssl list -providers 00:19:16.258 14:55:58 -- fips/fips.sh@116 -- # grep name 00:19:16.258 14:55:58 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:16.258 14:55:58 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:16.258 14:55:58 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:16.258 14:55:58 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:16.258 14:55:58 -- common/autotest_common.sh@638 -- # local es=0 00:19:16.258 14:55:58 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:16.258 14:55:58 -- fips/fips.sh@127 -- # : 00:19:16.258 14:55:58 -- common/autotest_common.sh@626 -- # local arg=openssl 00:19:16.258 14:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:16.258 14:55:58 -- common/autotest_common.sh@630 -- # type -t openssl 00:19:16.258 14:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:16.258 14:55:58 -- common/autotest_common.sh@632 -- # type -P openssl 00:19:16.258 14:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:16.258 14:55:58 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:19:16.258 14:55:58 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:19:16.258 14:55:58 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:19:16.258 Error setting digest 00:19:16.259 0052939D397F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:16.259 0052939D397F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:16.259 14:55:58 -- common/autotest_common.sh@641 -- # es=1 00:19:16.259 14:55:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:16.259 14:55:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:16.259 14:55:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:16.259 14:55:58 -- fips/fips.sh@130 -- # nvmftestinit 00:19:16.259 14:55:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:16.259 14:55:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.259 14:55:58 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:19:16.259 14:55:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:16.259 14:55:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:16.259 14:55:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.259 14:55:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.259 14:55:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.259 14:55:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:16.259 14:55:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:16.259 14:55:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.259 14:55:58 -- common/autotest_common.sh@10 -- # set +x 00:19:22.846 14:56:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:22.846 14:56:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.846 14:56:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.846 14:56:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.846 14:56:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.846 14:56:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.846 14:56:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.846 14:56:05 -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.846 14:56:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.846 14:56:05 -- nvmf/common.sh@296 -- # e810=() 00:19:22.846 14:56:05 -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.846 14:56:05 -- nvmf/common.sh@297 -- # x722=() 00:19:22.846 14:56:05 -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.846 14:56:05 -- nvmf/common.sh@298 -- # mlx=() 00:19:22.846 14:56:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.846 14:56:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.846 14:56:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.846 14:56:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.846 14:56:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.846 14:56:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.846 14:56:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:22.846 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:22.846 14:56:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.846 14:56:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:22.846 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:22.846 14:56:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.846 14:56:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.846 14:56:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.846 14:56:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:22.846 14:56:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.846 14:56:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:22.846 Found net devices under 0000:31:00.0: cvl_0_0 00:19:22.846 14:56:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.846 14:56:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.846 14:56:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.846 14:56:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:22.846 14:56:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.846 14:56:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:22.846 Found net devices under 0000:31:00.1: cvl_0_1 00:19:22.846 14:56:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.846 14:56:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:22.846 14:56:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:22.846 14:56:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:22.846 14:56:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:22.846 14:56:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.846 14:56:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.846 14:56:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.846 14:56:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:22.846 14:56:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.846 14:56:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.846 14:56:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:22.846 14:56:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.846 14:56:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.846 14:56:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:22.846 14:56:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:22.846 14:56:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.846 14:56:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.846 14:56:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.846 14:56:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:19:22.846 14:56:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:22.846 14:56:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.107 14:56:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.107 14:56:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.107 14:56:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:19:23.107 00:19:23.107 --- 10.0.0.2 ping statistics --- 00:19:23.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.107 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:19:23.107 14:56:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:19:23.107 00:19:23.107 --- 10.0.0.1 ping statistics --- 00:19:23.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.107 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:19:23.107 14:56:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.107 14:56:05 -- nvmf/common.sh@411 -- # return 0 00:19:23.107 14:56:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:23.107 14:56:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.107 14:56:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:23.107 14:56:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:23.107 14:56:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.107 14:56:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:23.107 14:56:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:23.107 14:56:05 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:23.107 14:56:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:23.107 14:56:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:23.107 14:56:05 -- common/autotest_common.sh@10 -- # set +x 00:19:23.107 14:56:05 -- nvmf/common.sh@470 -- # nvmfpid=1098385 00:19:23.107 14:56:05 -- nvmf/common.sh@471 -- # waitforlisten 1098385 00:19:23.108 14:56:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:23.108 14:56:05 -- common/autotest_common.sh@817 -- # '[' -z 1098385 ']' 00:19:23.108 14:56:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.108 14:56:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:23.108 14:56:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.108 14:56:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:23.108 14:56:05 -- common/autotest_common.sh@10 -- # set +x 00:19:23.108 [2024-04-26 14:56:05.759423] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:19:23.108 [2024-04-26 14:56:05.759476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.369 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.369 [2024-04-26 14:56:05.842120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.369 [2024-04-26 14:56:05.912456] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.369 [2024-04-26 14:56:05.912510] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.369 [2024-04-26 14:56:05.912518] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.369 [2024-04-26 14:56:05.912524] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.369 [2024-04-26 14:56:05.912530] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.369 [2024-04-26 14:56:05.912557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.941 14:56:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:23.942 14:56:06 -- common/autotest_common.sh@850 -- # return 0 00:19:23.942 14:56:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:23.942 14:56:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:23.942 14:56:06 -- common/autotest_common.sh@10 -- # set +x 00:19:23.942 14:56:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.942 14:56:06 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:23.942 14:56:06 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:23.942 14:56:06 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:23.942 14:56:06 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:23.942 14:56:06 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:23.942 14:56:06 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:23.942 14:56:06 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:23.942 14:56:06 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:24.204 [2024-04-26 14:56:06.701610] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.204 [2024-04-26 14:56:06.717604] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.204 [2024-04-26 14:56:06.717912] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.204 [2024-04-26 14:56:06.747624] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:24.204 malloc0 00:19:24.204 14:56:06 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.204 14:56:06 -- fips/fips.sh@147 -- # bdevperf_pid=1098634 00:19:24.204 14:56:06 -- fips/fips.sh@148 -- # waitforlisten 1098634 /var/tmp/bdevperf.sock 00:19:24.204 14:56:06 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.204 14:56:06 -- common/autotest_common.sh@817 -- # '[' -z 1098634 ']' 00:19:24.204 14:56:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.204 14:56:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:24.204 14:56:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.204 14:56:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:24.204 14:56:06 -- common/autotest_common.sh@10 -- # set +x 00:19:24.204 [2024-04-26 14:56:06.840186] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:24.204 [2024-04-26 14:56:06.840262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098634 ] 00:19:24.465 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.465 [2024-04-26 14:56:06.898593] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.465 [2024-04-26 14:56:06.960822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.035 14:56:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:25.035 14:56:07 -- common/autotest_common.sh@850 -- # return 0 00:19:25.036 14:56:07 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:25.295 [2024-04-26 14:56:07.744608] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.295 [2024-04-26 14:56:07.744676] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:25.295 TLSTESTn1 00:19:25.295 14:56:07 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:25.295 Running I/O for 10 seconds... 
00:19:35.369 00:19:35.369 Latency(us) 00:19:35.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.369 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:35.369 Verification LBA range: start 0x0 length 0x2000 00:19:35.369 TLSTESTn1 : 10.02 5876.79 22.96 0.00 0.00 21746.79 4587.52 49807.36 00:19:35.369 =================================================================================================================== 00:19:35.369 Total : 5876.79 22.96 0.00 0.00 21746.79 4587.52 49807.36 00:19:35.369 0 00:19:35.369 14:56:17 -- fips/fips.sh@1 -- # cleanup 00:19:35.369 14:56:17 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:35.369 14:56:17 -- common/autotest_common.sh@794 -- # type=--id 00:19:35.369 14:56:17 -- common/autotest_common.sh@795 -- # id=0 00:19:35.369 14:56:17 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:35.369 14:56:17 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:35.369 14:56:17 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:35.369 14:56:17 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:35.369 14:56:17 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:35.369 14:56:17 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:35.369 nvmf_trace.0 00:19:35.631 14:56:18 -- common/autotest_common.sh@809 -- # return 0 00:19:35.631 14:56:18 -- fips/fips.sh@16 -- # killprocess 1098634 00:19:35.631 14:56:18 -- common/autotest_common.sh@936 -- # '[' -z 1098634 ']' 00:19:35.631 14:56:18 -- common/autotest_common.sh@940 -- # kill -0 1098634 00:19:35.631 14:56:18 -- common/autotest_common.sh@941 -- # uname 00:19:35.631 14:56:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:35.631 14:56:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1098634 00:19:35.631 14:56:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:35.631 14:56:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:35.631 14:56:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1098634' 00:19:35.631 killing process with pid 1098634 00:19:35.631 14:56:18 -- common/autotest_common.sh@955 -- # kill 1098634 00:19:35.631 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.631 00:19:35.631 Latency(us) 00:19:35.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.631 =================================================================================================================== 00:19:35.631 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.631 [2024-04-26 14:56:18.116190] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:35.631 14:56:18 -- common/autotest_common.sh@960 -- # wait 1098634 00:19:35.631 14:56:18 -- fips/fips.sh@17 -- # nvmftestfini 00:19:35.631 14:56:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:35.631 14:56:18 -- nvmf/common.sh@117 -- # sync 00:19:35.631 14:56:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:35.631 14:56:18 -- nvmf/common.sh@120 -- # set +e 00:19:35.631 14:56:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:35.631 14:56:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:35.631 rmmod nvme_tcp 00:19:35.631 rmmod nvme_fabrics 00:19:35.631 rmmod nvme_keyring 
00:19:35.631 14:56:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:35.631 14:56:18 -- nvmf/common.sh@124 -- # set -e 00:19:35.631 14:56:18 -- nvmf/common.sh@125 -- # return 0 00:19:35.631 14:56:18 -- nvmf/common.sh@478 -- # '[' -n 1098385 ']' 00:19:35.631 14:56:18 -- nvmf/common.sh@479 -- # killprocess 1098385 00:19:35.631 14:56:18 -- common/autotest_common.sh@936 -- # '[' -z 1098385 ']' 00:19:35.631 14:56:18 -- common/autotest_common.sh@940 -- # kill -0 1098385 00:19:35.631 14:56:18 -- common/autotest_common.sh@941 -- # uname 00:19:35.631 14:56:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:35.631 14:56:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1098385 00:19:35.891 14:56:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:35.892 14:56:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:35.892 14:56:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1098385' 00:19:35.892 killing process with pid 1098385 00:19:35.892 14:56:18 -- common/autotest_common.sh@955 -- # kill 1098385 00:19:35.892 [2024-04-26 14:56:18.341543] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:35.892 14:56:18 -- common/autotest_common.sh@960 -- # wait 1098385 00:19:35.892 14:56:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:35.892 14:56:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:35.892 14:56:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:35.892 14:56:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:35.892 14:56:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:35.892 14:56:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.892 14:56:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.892 14:56:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.439 14:56:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:38.439 14:56:20 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:38.439 00:19:38.439 real 0m22.050s 00:19:38.439 user 0m23.879s 00:19:38.439 sys 0m8.764s 00:19:38.439 14:56:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:38.439 14:56:20 -- common/autotest_common.sh@10 -- # set +x 00:19:38.439 ************************************ 00:19:38.439 END TEST nvmf_fips 00:19:38.439 ************************************ 00:19:38.439 14:56:20 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:19:38.439 14:56:20 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:19:38.439 14:56:20 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:19:38.439 14:56:20 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:19:38.439 14:56:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:38.439 14:56:20 -- common/autotest_common.sh@10 -- # set +x 00:19:45.024 14:56:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:45.024 14:56:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.024 14:56:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.024 14:56:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.024 14:56:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.024 14:56:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.024 14:56:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.024 14:56:27 -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.024 14:56:27 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:19:45.024 14:56:27 -- nvmf/common.sh@296 -- # e810=() 00:19:45.024 14:56:27 -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.024 14:56:27 -- nvmf/common.sh@297 -- # x722=() 00:19:45.024 14:56:27 -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.024 14:56:27 -- nvmf/common.sh@298 -- # mlx=() 00:19:45.024 14:56:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.024 14:56:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.024 14:56:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.024 14:56:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.024 14:56:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.024 14:56:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.024 14:56:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:45.024 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:45.024 14:56:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.024 14:56:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:45.024 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:45.024 14:56:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.024 14:56:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.024 14:56:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.024 14:56:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.024 14:56:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:45.024 14:56:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.024 14:56:27 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:19:45.024 Found net devices under 0000:31:00.0: cvl_0_0 00:19:45.024 14:56:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.024 14:56:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.024 14:56:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.024 14:56:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:45.024 14:56:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.024 14:56:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:45.024 Found net devices under 0000:31:00.1: cvl_0_1 00:19:45.024 14:56:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.024 14:56:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:45.024 14:56:27 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.024 14:56:27 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:19:45.024 14:56:27 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:45.024 14:56:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:45.024 14:56:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:45.024 14:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:45.024 ************************************ 00:19:45.024 START TEST nvmf_perf_adq 00:19:45.024 ************************************ 00:19:45.024 14:56:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:45.286 * Looking for test storage... 00:19:45.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.286 14:56:27 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.286 14:56:27 -- nvmf/common.sh@7 -- # uname -s 00:19:45.286 14:56:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.286 14:56:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.286 14:56:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.286 14:56:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.286 14:56:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.286 14:56:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.286 14:56:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.286 14:56:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.286 14:56:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.286 14:56:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.286 14:56:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.286 14:56:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.286 14:56:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.286 14:56:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.286 14:56:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.286 14:56:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.286 14:56:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.286 14:56:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.286 14:56:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.287 14:56:27 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.287 14:56:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.287 14:56:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.287 14:56:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.287 14:56:27 -- paths/export.sh@5 -- # export PATH 00:19:45.287 14:56:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.287 14:56:27 -- nvmf/common.sh@47 -- # : 0 00:19:45.287 14:56:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.287 14:56:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.287 14:56:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.287 14:56:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.287 14:56:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.287 14:56:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.287 14:56:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.287 14:56:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.287 14:56:27 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:45.287 14:56:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.287 14:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:53.429 14:56:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:53.430 14:56:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:53.430 14:56:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:53.430 14:56:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:53.430 
14:56:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:53.430 14:56:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:53.430 14:56:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:53.430 14:56:34 -- nvmf/common.sh@295 -- # net_devs=() 00:19:53.430 14:56:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:53.430 14:56:34 -- nvmf/common.sh@296 -- # e810=() 00:19:53.430 14:56:34 -- nvmf/common.sh@296 -- # local -ga e810 00:19:53.430 14:56:34 -- nvmf/common.sh@297 -- # x722=() 00:19:53.430 14:56:34 -- nvmf/common.sh@297 -- # local -ga x722 00:19:53.430 14:56:34 -- nvmf/common.sh@298 -- # mlx=() 00:19:53.430 14:56:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:53.430 14:56:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.430 14:56:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:53.430 14:56:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:53.430 14:56:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:53.430 14:56:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.430 14:56:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:53.430 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:53.430 14:56:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.430 14:56:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:53.430 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:53.430 14:56:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:53.430 14:56:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:53.430 14:56:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:19:53.430 14:56:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.430 14:56:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:53.430 14:56:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.430 14:56:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:53.430 Found net devices under 0000:31:00.0: cvl_0_0 00:19:53.430 14:56:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.430 14:56:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.430 14:56:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.430 14:56:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:53.430 14:56:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.430 14:56:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:53.430 Found net devices under 0000:31:00.1: cvl_0_1 00:19:53.430 14:56:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.430 14:56:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:53.430 14:56:34 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.430 14:56:34 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:53.430 14:56:34 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:53.430 14:56:34 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:19:53.430 14:56:34 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:53.690 14:56:36 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:55.603 14:56:38 -- target/perf_adq.sh@54 -- # sleep 5 00:20:00.893 14:56:43 -- target/perf_adq.sh@67 -- # nvmftestinit 00:20:00.893 14:56:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:00.893 14:56:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.893 14:56:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:00.893 14:56:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:00.893 14:56:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:00.893 14:56:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.893 14:56:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.893 14:56:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.893 14:56:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:00.893 14:56:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.893 14:56:43 -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 14:56:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:00.893 14:56:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:00.893 14:56:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:00.893 14:56:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:00.893 14:56:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:00.893 14:56:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:00.893 14:56:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:00.893 14:56:43 -- nvmf/common.sh@295 -- # net_devs=() 00:20:00.893 14:56:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:00.893 14:56:43 -- nvmf/common.sh@296 -- # e810=() 00:20:00.893 14:56:43 -- nvmf/common.sh@296 -- # local -ga e810 00:20:00.893 14:56:43 -- nvmf/common.sh@297 -- # x722=() 00:20:00.893 14:56:43 -- nvmf/common.sh@297 -- # local -ga x722 00:20:00.893 14:56:43 -- nvmf/common.sh@298 -- # mlx=() 00:20:00.893 14:56:43 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:20:00.893 14:56:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.893 14:56:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:00.893 14:56:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:00.893 14:56:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:00.893 14:56:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.893 14:56:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:00.893 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:00.893 14:56:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.893 14:56:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:00.893 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:00.893 14:56:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:00.893 14:56:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.893 14:56:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.893 14:56:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:00.893 14:56:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.893 14:56:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:00.893 Found net devices under 0000:31:00.0: cvl_0_0 00:20:00.893 14:56:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.893 14:56:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.893 14:56:43 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.893 14:56:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:00.893 14:56:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.893 14:56:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:00.893 Found net devices under 0000:31:00.1: cvl_0_1 00:20:00.893 14:56:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.893 14:56:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:00.893 14:56:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:00.893 14:56:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:00.893 14:56:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.893 14:56:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.893 14:56:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.893 14:56:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:00.893 14:56:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.893 14:56:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.893 14:56:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:00.893 14:56:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.893 14:56:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.893 14:56:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:00.893 14:56:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:00.893 14:56:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.893 14:56:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.893 14:56:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.893 14:56:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.893 14:56:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:00.893 14:56:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.893 14:56:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.893 14:56:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.893 14:56:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:00.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:20:00.893 00:20:00.893 --- 10.0.0.2 ping statistics --- 00:20:00.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.893 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:20:00.893 14:56:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:20:00.893 00:20:00.893 --- 10.0.0.1 ping statistics --- 00:20:00.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.893 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:20:00.893 14:56:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.893 14:56:43 -- nvmf/common.sh@411 -- # return 0 00:20:00.893 14:56:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:00.893 14:56:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.893 14:56:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:00.893 14:56:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.893 14:56:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:00.893 14:56:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:01.154 14:56:43 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:01.154 14:56:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:01.154 14:56:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:01.154 14:56:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.155 14:56:43 -- nvmf/common.sh@470 -- # nvmfpid=1110613 00:20:01.155 14:56:43 -- nvmf/common.sh@471 -- # waitforlisten 1110613 00:20:01.155 14:56:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:01.155 14:56:43 -- common/autotest_common.sh@817 -- # '[' -z 1110613 ']' 00:20:01.155 14:56:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.155 14:56:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:01.155 14:56:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.155 14:56:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:01.155 14:56:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.155 [2024-04-26 14:56:43.633821] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:01.155 [2024-04-26 14:56:43.633899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.155 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.155 [2024-04-26 14:56:43.706280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.155 [2024-04-26 14:56:43.780881] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.155 [2024-04-26 14:56:43.780921] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.155 [2024-04-26 14:56:43.780930] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.155 [2024-04-26 14:56:43.780938] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.155 [2024-04-26 14:56:43.780945] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
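The nvmf_tcp_init sequence traced above gives the test its two-endpoint topology on a single host: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), its peer port cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), an iptables rule opens TCP 4420, and the two pings confirm the ports reach each other over the physical link. Condensed from the commands in the log:

# Sketch of the namespace topology built by nvmf_tcp_init (values as printed above).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator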
00:20:01.155 [2024-04-26 14:56:43.781157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.155 [2024-04-26 14:56:43.781272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.155 [2024-04-26 14:56:43.781430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.155 [2024-04-26 14:56:43.781431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.097 14:56:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:02.097 14:56:44 -- common/autotest_common.sh@850 -- # return 0 00:20:02.097 14:56:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:02.097 14:56:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:02.097 14:56:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.097 14:56:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.097 14:56:44 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:20:02.097 14:56:44 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:02.097 14:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.097 14:56:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.097 14:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.097 14:56:44 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:20:02.097 14:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.097 14:56:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.097 14:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.097 14:56:44 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:02.097 14:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.097 14:56:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.097 [2024-04-26 14:56:44.552769] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.097 14:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.097 14:56:44 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:02.097 14:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.097 14:56:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.097 Malloc1 00:20:02.097 14:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.097 14:56:44 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.097 14:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.097 14:56:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.097 14:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.097 14:56:44 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:02.097 14:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.097 14:56:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.097 14:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.097 14:56:44 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.097 14:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.097 14:56:44 -- common/autotest_common.sh@10 -- # set +x 00:20:02.097 [2024-04-26 14:56:44.612160] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.097 14:56:44 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.097 14:56:44 -- target/perf_adq.sh@73 -- # perfpid=1110797 00:20:02.097 14:56:44 -- target/perf_adq.sh@74 -- # sleep 2 00:20:02.097 14:56:44 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:02.097 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.010 14:56:46 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:20:04.010 14:56:46 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:04.010 14:56:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.010 14:56:46 -- target/perf_adq.sh@76 -- # wc -l 00:20:04.010 14:56:46 -- common/autotest_common.sh@10 -- # set +x 00:20:04.010 14:56:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.010 14:56:46 -- target/perf_adq.sh@76 -- # count=4 00:20:04.010 14:56:46 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:20:04.010 14:56:46 -- target/perf_adq.sh@81 -- # wait 1110797 00:20:12.154 Initializing NVMe Controllers 00:20:12.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:12.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:12.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:12.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:12.154 Initialization complete. Launching workers. 00:20:12.154 ======================================================== 00:20:12.154 Latency(us) 00:20:12.154 Device Information : IOPS MiB/s Average min max 00:20:12.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10735.50 41.94 5961.56 1416.12 9679.01 00:20:12.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14410.10 56.29 4445.49 1415.88 42245.08 00:20:12.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13288.90 51.91 4816.83 1351.57 11219.41 00:20:12.154 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13023.60 50.87 4914.08 1267.16 11463.36 00:20:12.154 ======================================================== 00:20:12.154 Total : 51458.10 201.01 4976.28 1267.16 42245.08 00:20:12.154 00:20:12.154 14:56:54 -- target/perf_adq.sh@82 -- # nvmftestfini 00:20:12.154 14:56:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:12.154 14:56:54 -- nvmf/common.sh@117 -- # sync 00:20:12.154 14:56:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.416 14:56:54 -- nvmf/common.sh@120 -- # set +e 00:20:12.416 14:56:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.416 14:56:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.416 rmmod nvme_tcp 00:20:12.416 rmmod nvme_fabrics 00:20:12.416 rmmod nvme_keyring 00:20:12.416 14:56:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.416 14:56:54 -- nvmf/common.sh@124 -- # set -e 00:20:12.416 14:56:54 -- nvmf/common.sh@125 -- # return 0 00:20:12.416 14:56:54 -- nvmf/common.sh@478 -- # '[' -n 1110613 ']' 00:20:12.416 14:56:54 -- nvmf/common.sh@479 -- # killprocess 1110613 00:20:12.416 14:56:54 -- common/autotest_common.sh@936 -- # '[' -z 1110613 ']' 00:20:12.416 14:56:54 -- common/autotest_common.sh@940 
-- # kill -0 1110613 00:20:12.416 14:56:54 -- common/autotest_common.sh@941 -- # uname 00:20:12.416 14:56:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.416 14:56:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1110613 00:20:12.416 14:56:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:12.416 14:56:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:12.416 14:56:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1110613' 00:20:12.416 killing process with pid 1110613 00:20:12.416 14:56:54 -- common/autotest_common.sh@955 -- # kill 1110613 00:20:12.416 14:56:54 -- common/autotest_common.sh@960 -- # wait 1110613 00:20:12.416 14:56:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:12.416 14:56:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:12.416 14:56:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:12.416 14:56:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.416 14:56:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.416 14:56:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.416 14:56:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.416 14:56:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.964 14:56:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:14.964 14:56:57 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:20:14.964 14:56:57 -- target/perf_adq.sh@52 -- # rmmod ice 00:20:16.353 14:56:58 -- target/perf_adq.sh@53 -- # modprobe ice 00:20:18.899 14:57:00 -- target/perf_adq.sh@54 -- # sleep 5 00:20:24.218 14:57:05 -- target/perf_adq.sh@87 -- # nvmftestinit 00:20:24.218 14:57:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:24.218 14:57:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.218 14:57:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:24.218 14:57:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:24.218 14:57:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:24.218 14:57:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.218 14:57:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.218 14:57:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.218 14:57:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:24.218 14:57:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.218 14:57:05 -- common/autotest_common.sh@10 -- # set +x 00:20:24.218 14:57:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:24.218 14:57:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.218 14:57:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.218 14:57:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.218 14:57:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.218 14:57:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.218 14:57:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.218 14:57:05 -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.218 14:57:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.218 14:57:05 -- nvmf/common.sh@296 -- # e810=() 00:20:24.218 14:57:05 -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.218 14:57:05 -- nvmf/common.sh@297 -- # x722=() 00:20:24.218 14:57:05 -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.218 14:57:05 -- nvmf/common.sh@298 -- # mlx=() 00:20:24.218 
14:57:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.218 14:57:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.218 14:57:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.218 14:57:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.218 14:57:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.218 14:57:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.218 14:57:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:24.218 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:24.218 14:57:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.218 14:57:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:24.218 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:24.218 14:57:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.218 14:57:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.218 14:57:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.218 14:57:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.219 14:57:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:24.219 14:57:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.219 14:57:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:24.219 Found net devices under 0000:31:00.0: cvl_0_0 00:20:24.219 14:57:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.219 14:57:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.219 14:57:05 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.219 14:57:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:24.219 14:57:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.219 14:57:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:24.219 Found net devices under 0000:31:00.1: cvl_0_1 00:20:24.219 14:57:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.219 14:57:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:24.219 14:57:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:24.219 14:57:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:24.219 14:57:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:24.219 14:57:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:24.219 14:57:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.219 14:57:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.219 14:57:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.219 14:57:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:24.219 14:57:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.219 14:57:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.219 14:57:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.219 14:57:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.219 14:57:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.219 14:57:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.219 14:57:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:24.219 14:57:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.219 14:57:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.219 14:57:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.219 14:57:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.219 14:57:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.219 14:57:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.219 14:57:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.219 14:57:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.219 14:57:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:24.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:20:24.219 00:20:24.219 --- 10.0.0.2 ping statistics --- 00:20:24.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.219 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:20:24.219 14:57:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:24.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:20:24.219 00:20:24.219 --- 10.0.0.1 ping statistics --- 00:20:24.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.219 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:20:24.219 14:57:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.219 14:57:06 -- nvmf/common.sh@411 -- # return 0 00:20:24.219 14:57:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:24.219 14:57:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.219 14:57:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:24.219 14:57:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:24.219 14:57:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.219 14:57:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:24.219 14:57:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:24.219 14:57:06 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:20:24.219 14:57:06 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:24.219 14:57:06 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:24.219 14:57:06 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:24.219 net.core.busy_poll = 1 00:20:24.219 14:57:06 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:24.219 net.core.busy_read = 1 00:20:24.219 14:57:06 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:24.219 14:57:06 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:24.219 14:57:06 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:24.219 14:57:06 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:24.219 14:57:06 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:24.219 14:57:06 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:24.219 14:57:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:24.219 14:57:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:24.219 14:57:06 -- common/autotest_common.sh@10 -- # set +x 00:20:24.219 14:57:06 -- nvmf/common.sh@470 -- # nvmfpid=1115688 00:20:24.219 14:57:06 -- nvmf/common.sh@471 -- # waitforlisten 1115688 00:20:24.219 14:57:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:24.219 14:57:06 -- common/autotest_common.sh@817 -- # '[' -z 1115688 ']' 00:20:24.219 14:57:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.219 14:57:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:24.219 14:57:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
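The adq_configure_driver steps traced above are what distinguish this second perf run from the first: hardware TC offload and busy polling are switched on, an mqprio qdisc splits the E810 queues into two traffic classes, and a flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 to the dedicated class (skip_sw, hw_tc 1) before set_xps_rxqs aligns XPS with the receive queues. Condensed from the trace, run against the target port inside its namespace:

# Sketch of the ADQ configuration applied to cvl_0_0 (values copied from the trace).
NS="ip netns exec cvl_0_0_ns_spdk"
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: 2 default queues at offset 0, 2 ADQ queues at offset 2.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP (10.0.0.2:4420) into hardware TC 1, bypassing the software fallback.
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1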
00:20:24.219 14:57:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:24.219 14:57:06 -- common/autotest_common.sh@10 -- # set +x 00:20:24.219 [2024-04-26 14:57:06.645707] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:24.219 [2024-04-26 14:57:06.645756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.219 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.219 [2024-04-26 14:57:06.711416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.219 [2024-04-26 14:57:06.774915] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.219 [2024-04-26 14:57:06.774956] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.219 [2024-04-26 14:57:06.774965] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.219 [2024-04-26 14:57:06.774973] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.219 [2024-04-26 14:57:06.774980] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.219 [2024-04-26 14:57:06.775130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.219 [2024-04-26 14:57:06.775138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.219 [2024-04-26 14:57:06.775279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.219 [2024-04-26 14:57:06.775281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.868 14:57:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:24.868 14:57:07 -- common/autotest_common.sh@850 -- # return 0 00:20:24.868 14:57:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:24.868 14:57:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:24.868 14:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:24.868 14:57:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.868 14:57:07 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:20:24.868 14:57:07 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:24.868 14:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.868 14:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:24.868 14:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.868 14:57:07 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:20:24.868 14:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.868 14:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:24.868 14:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.868 14:57:07 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:24.868 14:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.868 14:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.130 [2024-04-26 14:57:07.536786] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.130 14:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.130 14:57:07 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
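Because nvmf_tgt was started with --wait-for-rpc, adq_configure_nvmf_target does all of its setup over JSON-RPC: ADQ-aware posix socket options (placement-id 1, zero-copy send), framework_start_init to leave the RPC-only state, a TCP transport created with --sock-priority 1, and then, in the lines that follow, a 64 MiB Malloc namespace exported on 10.0.0.2:4420. The rpc_cmd wrapper maps onto scripts/rpc.py roughly as below; the rpc.py path is inferred from the workspace layout shown elsewhere in this log, and the flags are copied verbatim from the trace:

# Sketch: the same target configuration issued directly through scripts/rpc.py.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
$RPC framework_start_init                                   # leave the --wait-for-rpc pause
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1                   # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420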
00:20:25.130 14:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.130 14:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.130 Malloc1 00:20:25.130 14:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.130 14:57:07 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.130 14:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.130 14:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.130 14:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.130 14:57:07 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:25.130 14:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.130 14:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.130 14:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.130 14:57:07 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.130 14:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.130 14:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.130 [2024-04-26 14:57:07.592181] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.130 14:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.130 14:57:07 -- target/perf_adq.sh@94 -- # perfpid=1115734 00:20:25.130 14:57:07 -- target/perf_adq.sh@95 -- # sleep 2 00:20:25.130 14:57:07 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:25.130 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.044 14:57:09 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:20:27.044 14:57:09 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:27.044 14:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.044 14:57:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.044 14:57:09 -- target/perf_adq.sh@97 -- # wc -l 00:20:27.044 14:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.044 14:57:09 -- target/perf_adq.sh@97 -- # count=2 00:20:27.044 14:57:09 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:20:27.044 14:57:09 -- target/perf_adq.sh@103 -- # wait 1115734 00:20:35.177 Initializing NVMe Controllers 00:20:35.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:35.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:35.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:35.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:35.177 Initialization complete. Launching workers. 
00:20:35.177 ======================================================== 00:20:35.177 Latency(us) 00:20:35.177 Device Information : IOPS MiB/s Average min max 00:20:35.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9362.38 36.57 6835.51 1030.12 53026.74 00:20:35.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10281.07 40.16 6243.47 834.93 50390.13 00:20:35.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9563.08 37.36 6719.07 1170.88 54483.68 00:20:35.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10219.67 39.92 6263.56 1154.89 50892.91 00:20:35.177 ======================================================== 00:20:35.177 Total : 39426.21 154.01 6504.63 834.93 54483.68 00:20:35.177 00:20:35.177 14:57:17 -- target/perf_adq.sh@104 -- # nvmftestfini 00:20:35.177 14:57:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:35.177 14:57:17 -- nvmf/common.sh@117 -- # sync 00:20:35.177 14:57:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.177 14:57:17 -- nvmf/common.sh@120 -- # set +e 00:20:35.178 14:57:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.178 14:57:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.178 rmmod nvme_tcp 00:20:35.178 rmmod nvme_fabrics 00:20:35.438 rmmod nvme_keyring 00:20:35.438 14:57:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.438 14:57:17 -- nvmf/common.sh@124 -- # set -e 00:20:35.438 14:57:17 -- nvmf/common.sh@125 -- # return 0 00:20:35.438 14:57:17 -- nvmf/common.sh@478 -- # '[' -n 1115688 ']' 00:20:35.438 14:57:17 -- nvmf/common.sh@479 -- # killprocess 1115688 00:20:35.438 14:57:17 -- common/autotest_common.sh@936 -- # '[' -z 1115688 ']' 00:20:35.438 14:57:17 -- common/autotest_common.sh@940 -- # kill -0 1115688 00:20:35.438 14:57:17 -- common/autotest_common.sh@941 -- # uname 00:20:35.438 14:57:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:35.438 14:57:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1115688 00:20:35.438 14:57:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:35.438 14:57:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:35.438 14:57:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1115688' 00:20:35.438 killing process with pid 1115688 00:20:35.438 14:57:17 -- common/autotest_common.sh@955 -- # kill 1115688 00:20:35.438 14:57:17 -- common/autotest_common.sh@960 -- # wait 1115688 00:20:35.438 14:57:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:35.438 14:57:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:35.438 14:57:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:35.438 14:57:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.438 14:57:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.438 14:57:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.438 14:57:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.438 14:57:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.735 14:57:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:38.735 14:57:21 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:20:38.735 00:20:38.735 real 0m53.543s 00:20:38.735 user 2m49.409s 00:20:38.735 sys 0m10.513s 00:20:38.735 14:57:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:38.735 14:57:21 -- common/autotest_common.sh@10 -- # set +x 00:20:38.735 
************************************ 00:20:38.735 END TEST nvmf_perf_adq 00:20:38.735 ************************************ 00:20:38.735 14:57:21 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:38.735 14:57:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:38.735 14:57:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.735 14:57:21 -- common/autotest_common.sh@10 -- # set +x 00:20:38.735 ************************************ 00:20:38.735 START TEST nvmf_shutdown 00:20:38.735 ************************************ 00:20:38.736 14:57:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:38.996 * Looking for test storage... 00:20:38.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:38.996 14:57:21 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.996 14:57:21 -- nvmf/common.sh@7 -- # uname -s 00:20:38.996 14:57:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.996 14:57:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.996 14:57:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.996 14:57:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.996 14:57:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.996 14:57:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.996 14:57:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.996 14:57:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.996 14:57:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.996 14:57:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.996 14:57:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.996 14:57:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.996 14:57:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.996 14:57:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.996 14:57:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.996 14:57:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.996 14:57:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.996 14:57:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.996 14:57:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.996 14:57:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.996 14:57:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.996 14:57:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.996 14:57:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.996 14:57:21 -- paths/export.sh@5 -- # export PATH 00:20:38.996 14:57:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.996 14:57:21 -- nvmf/common.sh@47 -- # : 0 00:20:38.996 14:57:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:38.996 14:57:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:38.996 14:57:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.996 14:57:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.996 14:57:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.996 14:57:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:38.996 14:57:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:38.996 14:57:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:38.996 14:57:21 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:38.996 14:57:21 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:38.996 14:57:21 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:38.996 14:57:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:38.996 14:57:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.996 14:57:21 -- common/autotest_common.sh@10 -- # set +x 00:20:38.996 ************************************ 00:20:38.996 START TEST nvmf_shutdown_tc1 00:20:38.996 ************************************ 00:20:38.996 14:57:21 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:20:38.996 14:57:21 -- target/shutdown.sh@74 -- # starttarget 00:20:38.996 14:57:21 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:38.996 14:57:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:38.996 14:57:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.996 14:57:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:38.996 14:57:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:38.996 14:57:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:38.996 
14:57:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.996 14:57:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.996 14:57:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.996 14:57:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:39.256 14:57:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:39.256 14:57:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:39.256 14:57:21 -- common/autotest_common.sh@10 -- # set +x 00:20:45.838 14:57:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:45.838 14:57:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:45.838 14:57:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:45.838 14:57:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:45.838 14:57:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:45.838 14:57:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:45.838 14:57:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:45.838 14:57:28 -- nvmf/common.sh@295 -- # net_devs=() 00:20:45.838 14:57:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:45.838 14:57:28 -- nvmf/common.sh@296 -- # e810=() 00:20:45.838 14:57:28 -- nvmf/common.sh@296 -- # local -ga e810 00:20:45.838 14:57:28 -- nvmf/common.sh@297 -- # x722=() 00:20:45.838 14:57:28 -- nvmf/common.sh@297 -- # local -ga x722 00:20:45.838 14:57:28 -- nvmf/common.sh@298 -- # mlx=() 00:20:45.838 14:57:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:45.838 14:57:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.838 14:57:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:45.838 14:57:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:45.838 14:57:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:45.838 14:57:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.838 14:57:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:45.838 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:45.838 14:57:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:45.838 14:57:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:45.838 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:45.838 14:57:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:45.838 14:57:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.838 14:57:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.838 14:57:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:45.838 14:57:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.838 14:57:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:45.838 Found net devices under 0000:31:00.0: cvl_0_0 00:20:45.838 14:57:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.838 14:57:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.838 14:57:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.838 14:57:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:45.838 14:57:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.838 14:57:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:45.838 Found net devices under 0000:31:00.1: cvl_0_1 00:20:45.838 14:57:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.838 14:57:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:45.838 14:57:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:45.838 14:57:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:45.838 14:57:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:45.838 14:57:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.838 14:57:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.838 14:57:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.838 14:57:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:45.838 14:57:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.838 14:57:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.838 14:57:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:45.838 14:57:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.838 14:57:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.838 14:57:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:45.838 14:57:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:45.838 14:57:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.838 14:57:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.838 14:57:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.838 14:57:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.838 14:57:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:45.838 14:57:28 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.838 14:57:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.838 14:57:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.098 14:57:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:46.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:20:46.098 00:20:46.098 --- 10.0.0.2 ping statistics --- 00:20:46.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.099 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:20:46.099 14:57:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:20:46.099 00:20:46.099 --- 10.0.0.1 ping statistics --- 00:20:46.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.099 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:20:46.099 14:57:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.099 14:57:28 -- nvmf/common.sh@411 -- # return 0 00:20:46.099 14:57:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:46.099 14:57:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.099 14:57:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:46.099 14:57:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:46.099 14:57:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.099 14:57:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:46.099 14:57:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:46.099 14:57:28 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:46.099 14:57:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:46.099 14:57:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:46.099 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:20:46.099 14:57:28 -- nvmf/common.sh@470 -- # nvmfpid=1122722 00:20:46.099 14:57:28 -- nvmf/common.sh@471 -- # waitforlisten 1122722 00:20:46.099 14:57:28 -- common/autotest_common.sh@817 -- # '[' -z 1122722 ']' 00:20:46.099 14:57:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.099 14:57:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:46.099 14:57:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.099 14:57:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:46.099 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:20:46.099 14:57:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:46.099 [2024-04-26 14:57:28.612819] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:20:46.099 [2024-04-26 14:57:28.612897] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.099 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.099 [2024-04-26 14:57:28.701107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.359 [2024-04-26 14:57:28.793532] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.359 [2024-04-26 14:57:28.793596] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.359 [2024-04-26 14:57:28.793604] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.359 [2024-04-26 14:57:28.793611] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.359 [2024-04-26 14:57:28.793618] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.359 [2024-04-26 14:57:28.793754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.359 [2024-04-26 14:57:28.793918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.359 [2024-04-26 14:57:28.794089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.359 [2024-04-26 14:57:28.794090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:46.928 14:57:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:46.928 14:57:29 -- common/autotest_common.sh@850 -- # return 0 00:20:46.928 14:57:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:46.928 14:57:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:46.928 14:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:46.928 14:57:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.928 14:57:29 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:46.928 14:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.928 14:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:46.928 [2024-04-26 14:57:29.425324] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.928 14:57:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.928 14:57:29 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:46.928 14:57:29 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:46.928 14:57:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:46.928 14:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:46.928 14:57:29 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 -- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 -- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 -- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 -- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 
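For orientation, the nvmftestinit/nvmfappstart sequence traced above boils down to: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its peer port (cvl_0_1) stays on the host side as the initiator with 10.0.0.1, TCP port 4420 is opened, and nvmf_tgt is then started inside the namespace on cores 1-4 (-m 0x1E). A rough manual equivalent, with the interface names taken from this rig (substitute your own NIC names elsewhere):

# Namespace plumbing mirrored from the trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator-to-target reachability check, as in the log

# Start the target inside the namespace with the same flags as nvmfappstart -m 0x1E.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

The separate namespace lets a single host act as both NVMe/TCP target and initiator over the physical link instead of the loopback path.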
-- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 -- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 -- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 -- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 -- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:46.928 14:57:29 -- target/shutdown.sh@28 -- # cat 00:20:46.928 14:57:29 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:46.928 14:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.928 14:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:46.928 Malloc1 00:20:46.928 [2024-04-26 14:57:29.528783] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.928 Malloc2 00:20:46.928 Malloc3 00:20:47.188 Malloc4 00:20:47.188 Malloc5 00:20:47.188 Malloc6 00:20:47.188 Malloc7 00:20:47.188 Malloc8 00:20:47.188 Malloc9 00:20:47.449 Malloc10 00:20:47.449 14:57:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.449 14:57:29 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:47.449 14:57:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:47.449 14:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:47.449 14:57:29 -- target/shutdown.sh@78 -- # perfpid=1123107 00:20:47.449 14:57:29 -- target/shutdown.sh@79 -- # waitforlisten 1123107 /var/tmp/bdevperf.sock 00:20:47.449 14:57:29 -- common/autotest_common.sh@817 -- # '[' -z 1123107 ']' 00:20:47.449 14:57:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.449 14:57:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:47.449 14:57:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:47.449 14:57:29 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:47.449 14:57:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:47.449 14:57:29 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:47.449 14:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:47.449 14:57:29 -- nvmf/common.sh@521 -- # config=() 00:20:47.449 14:57:29 -- nvmf/common.sh@521 -- # local subsystem config 00:20:47.449 14:57:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.449 14:57:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.449 { 00:20:47.449 "params": { 00:20:47.449 "name": "Nvme$subsystem", 00:20:47.449 "trtype": "$TEST_TRANSPORT", 00:20:47.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.449 "adrfam": "ipv4", 00:20:47.449 "trsvcid": "$NVMF_PORT", 00:20:47.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.449 "hdgst": ${hdgst:-false}, 00:20:47.449 "ddgst": ${ddgst:-false} 00:20:47.449 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # cat 00:20:47.450 14:57:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.450 { 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme$subsystem", 00:20:47.450 "trtype": "$TEST_TRANSPORT", 00:20:47.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "$NVMF_PORT", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.450 "hdgst": ${hdgst:-false}, 00:20:47.450 "ddgst": ${ddgst:-false} 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # cat 00:20:47.450 14:57:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.450 { 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme$subsystem", 00:20:47.450 "trtype": "$TEST_TRANSPORT", 00:20:47.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "$NVMF_PORT", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.450 "hdgst": ${hdgst:-false}, 00:20:47.450 "ddgst": ${ddgst:-false} 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # cat 00:20:47.450 14:57:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.450 { 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme$subsystem", 00:20:47.450 "trtype": "$TEST_TRANSPORT", 00:20:47.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "$NVMF_PORT", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.450 "hdgst": ${hdgst:-false}, 00:20:47.450 "ddgst": ${ddgst:-false} 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:29 -- 
nvmf/common.sh@543 -- # cat 00:20:47.450 14:57:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.450 { 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme$subsystem", 00:20:47.450 "trtype": "$TEST_TRANSPORT", 00:20:47.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "$NVMF_PORT", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.450 "hdgst": ${hdgst:-false}, 00:20:47.450 "ddgst": ${ddgst:-false} 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # cat 00:20:47.450 14:57:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.450 { 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme$subsystem", 00:20:47.450 "trtype": "$TEST_TRANSPORT", 00:20:47.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "$NVMF_PORT", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.450 "hdgst": ${hdgst:-false}, 00:20:47.450 "ddgst": ${ddgst:-false} 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # cat 00:20:47.450 [2024-04-26 14:57:29.978920] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:47.450 [2024-04-26 14:57:29.978971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:47.450 14:57:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.450 { 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme$subsystem", 00:20:47.450 "trtype": "$TEST_TRANSPORT", 00:20:47.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "$NVMF_PORT", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.450 "hdgst": ${hdgst:-false}, 00:20:47.450 "ddgst": ${ddgst:-false} 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # cat 00:20:47.450 14:57:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.450 { 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme$subsystem", 00:20:47.450 "trtype": "$TEST_TRANSPORT", 00:20:47.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "$NVMF_PORT", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.450 "hdgst": ${hdgst:-false}, 00:20:47.450 "ddgst": ${ddgst:-false} 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # cat 00:20:47.450 14:57:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.450 14:57:29 -- nvmf/common.sh@543 
-- # config+=("$(cat <<-EOF 00:20:47.450 { 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme$subsystem", 00:20:47.450 "trtype": "$TEST_TRANSPORT", 00:20:47.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "$NVMF_PORT", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.450 "hdgst": ${hdgst:-false}, 00:20:47.450 "ddgst": ${ddgst:-false} 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:29 -- nvmf/common.sh@543 -- # cat 00:20:47.450 14:57:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:47.450 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.450 14:57:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:47.450 { 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme$subsystem", 00:20:47.450 "trtype": "$TEST_TRANSPORT", 00:20:47.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "$NVMF_PORT", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.450 "hdgst": ${hdgst:-false}, 00:20:47.450 "ddgst": ${ddgst:-false} 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 } 00:20:47.450 EOF 00:20:47.450 )") 00:20:47.450 14:57:30 -- nvmf/common.sh@543 -- # cat 00:20:47.450 14:57:30 -- nvmf/common.sh@545 -- # jq . 00:20:47.450 14:57:30 -- nvmf/common.sh@546 -- # IFS=, 00:20:47.450 14:57:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme1", 00:20:47.450 "trtype": "tcp", 00:20:47.450 "traddr": "10.0.0.2", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "4420", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.450 "hdgst": false, 00:20:47.450 "ddgst": false 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 },{ 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme2", 00:20:47.450 "trtype": "tcp", 00:20:47.450 "traddr": "10.0.0.2", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "4420", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:47.450 "hdgst": false, 00:20:47.450 "ddgst": false 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 },{ 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme3", 00:20:47.450 "trtype": "tcp", 00:20:47.450 "traddr": "10.0.0.2", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "4420", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:47.450 "hdgst": false, 00:20:47.450 "ddgst": false 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 },{ 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme4", 00:20:47.450 "trtype": "tcp", 00:20:47.450 "traddr": "10.0.0.2", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "4420", 00:20:47.450 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:47.450 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:47.450 "hdgst": false, 00:20:47.450 "ddgst": false 00:20:47.450 }, 00:20:47.450 "method": "bdev_nvme_attach_controller" 00:20:47.450 },{ 00:20:47.450 "params": { 00:20:47.450 "name": "Nvme5", 00:20:47.450 "trtype": "tcp", 00:20:47.450 "traddr": "10.0.0.2", 00:20:47.450 "adrfam": "ipv4", 00:20:47.450 "trsvcid": "4420", 
00:20:47.451 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:47.451 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:47.451 "hdgst": false, 00:20:47.451 "ddgst": false 00:20:47.451 }, 00:20:47.451 "method": "bdev_nvme_attach_controller" 00:20:47.451 },{ 00:20:47.451 "params": { 00:20:47.451 "name": "Nvme6", 00:20:47.451 "trtype": "tcp", 00:20:47.451 "traddr": "10.0.0.2", 00:20:47.451 "adrfam": "ipv4", 00:20:47.451 "trsvcid": "4420", 00:20:47.451 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:47.451 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:47.451 "hdgst": false, 00:20:47.451 "ddgst": false 00:20:47.451 }, 00:20:47.451 "method": "bdev_nvme_attach_controller" 00:20:47.451 },{ 00:20:47.451 "params": { 00:20:47.451 "name": "Nvme7", 00:20:47.451 "trtype": "tcp", 00:20:47.451 "traddr": "10.0.0.2", 00:20:47.451 "adrfam": "ipv4", 00:20:47.451 "trsvcid": "4420", 00:20:47.451 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:47.451 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:47.451 "hdgst": false, 00:20:47.451 "ddgst": false 00:20:47.451 }, 00:20:47.451 "method": "bdev_nvme_attach_controller" 00:20:47.451 },{ 00:20:47.451 "params": { 00:20:47.451 "name": "Nvme8", 00:20:47.451 "trtype": "tcp", 00:20:47.451 "traddr": "10.0.0.2", 00:20:47.451 "adrfam": "ipv4", 00:20:47.451 "trsvcid": "4420", 00:20:47.451 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:47.451 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:47.451 "hdgst": false, 00:20:47.451 "ddgst": false 00:20:47.451 }, 00:20:47.451 "method": "bdev_nvme_attach_controller" 00:20:47.451 },{ 00:20:47.451 "params": { 00:20:47.451 "name": "Nvme9", 00:20:47.451 "trtype": "tcp", 00:20:47.451 "traddr": "10.0.0.2", 00:20:47.451 "adrfam": "ipv4", 00:20:47.451 "trsvcid": "4420", 00:20:47.451 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:47.451 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:47.451 "hdgst": false, 00:20:47.451 "ddgst": false 00:20:47.451 }, 00:20:47.451 "method": "bdev_nvme_attach_controller" 00:20:47.451 },{ 00:20:47.451 "params": { 00:20:47.451 "name": "Nvme10", 00:20:47.451 "trtype": "tcp", 00:20:47.451 "traddr": "10.0.0.2", 00:20:47.451 "adrfam": "ipv4", 00:20:47.451 "trsvcid": "4420", 00:20:47.451 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:47.451 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:47.451 "hdgst": false, 00:20:47.451 "ddgst": false 00:20:47.451 }, 00:20:47.451 "method": "bdev_nvme_attach_controller" 00:20:47.451 }' 00:20:47.451 [2024-04-26 14:57:30.041684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.451 [2024-04-26 14:57:30.104454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.833 14:57:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:48.833 14:57:31 -- common/autotest_common.sh@850 -- # return 0 00:20:48.833 14:57:31 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:48.833 14:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.833 14:57:31 -- common/autotest_common.sh@10 -- # set +x 00:20:48.833 14:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.833 14:57:31 -- target/shutdown.sh@83 -- # kill -9 1123107 00:20:48.833 14:57:31 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:48.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1123107 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:48.833 14:57:31 -- target/shutdown.sh@87 -- # sleep 1 00:20:49.796 
14:57:32 -- target/shutdown.sh@88 -- # kill -0 1122722 00:20:49.796 14:57:32 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:49.796 14:57:32 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:49.796 14:57:32 -- nvmf/common.sh@521 -- # config=() 00:20:49.796 14:57:32 -- nvmf/common.sh@521 -- # local subsystem config 00:20:49.796 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:49.796 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.796 { 00:20:49.796 "params": { 00:20:49.796 "name": "Nvme$subsystem", 00:20:49.796 "trtype": "$TEST_TRANSPORT", 00:20:49.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.796 "adrfam": "ipv4", 00:20:49.796 "trsvcid": "$NVMF_PORT", 00:20:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.797 { 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme$subsystem", 00:20:49.797 "trtype": "$TEST_TRANSPORT", 00:20:49.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "$NVMF_PORT", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.797 { 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme$subsystem", 00:20:49.797 "trtype": "$TEST_TRANSPORT", 00:20:49.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "$NVMF_PORT", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.797 { 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme$subsystem", 00:20:49.797 "trtype": "$TEST_TRANSPORT", 00:20:49.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "$NVMF_PORT", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in 
"${@:-1}" 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.797 { 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme$subsystem", 00:20:49.797 "trtype": "$TEST_TRANSPORT", 00:20:49.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "$NVMF_PORT", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.797 { 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme$subsystem", 00:20:49.797 "trtype": "$TEST_TRANSPORT", 00:20:49.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "$NVMF_PORT", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 [2024-04-26 14:57:32.353893] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:49.797 [2024-04-26 14:57:32.353943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123486 ] 00:20:49.797 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.797 { 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme$subsystem", 00:20:49.797 "trtype": "$TEST_TRANSPORT", 00:20:49.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "$NVMF_PORT", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.797 { 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme$subsystem", 00:20:49.797 "trtype": "$TEST_TRANSPORT", 00:20:49.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "$NVMF_PORT", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.797 { 00:20:49.797 "params": { 
00:20:49.797 "name": "Nvme$subsystem", 00:20:49.797 "trtype": "$TEST_TRANSPORT", 00:20:49.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "$NVMF_PORT", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 14:57:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:49.797 { 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme$subsystem", 00:20:49.797 "trtype": "$TEST_TRANSPORT", 00:20:49.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "$NVMF_PORT", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.797 "hdgst": ${hdgst:-false}, 00:20:49.797 "ddgst": ${ddgst:-false} 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 } 00:20:49.797 EOF 00:20:49.797 )") 00:20:49.797 14:57:32 -- nvmf/common.sh@543 -- # cat 00:20:49.797 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.797 14:57:32 -- nvmf/common.sh@545 -- # jq . 00:20:49.797 14:57:32 -- nvmf/common.sh@546 -- # IFS=, 00:20:49.797 14:57:32 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme1", 00:20:49.797 "trtype": "tcp", 00:20:49.797 "traddr": "10.0.0.2", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "4420", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.797 "hdgst": false, 00:20:49.797 "ddgst": false 00:20:49.797 }, 00:20:49.797 "method": "bdev_nvme_attach_controller" 00:20:49.797 },{ 00:20:49.797 "params": { 00:20:49.797 "name": "Nvme2", 00:20:49.797 "trtype": "tcp", 00:20:49.797 "traddr": "10.0.0.2", 00:20:49.797 "adrfam": "ipv4", 00:20:49.797 "trsvcid": "4420", 00:20:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.797 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.797 "hdgst": false, 00:20:49.797 "ddgst": false 00:20:49.797 }, 00:20:49.798 "method": "bdev_nvme_attach_controller" 00:20:49.798 },{ 00:20:49.798 "params": { 00:20:49.798 "name": "Nvme3", 00:20:49.798 "trtype": "tcp", 00:20:49.798 "traddr": "10.0.0.2", 00:20:49.798 "adrfam": "ipv4", 00:20:49.798 "trsvcid": "4420", 00:20:49.798 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:49.798 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:49.798 "hdgst": false, 00:20:49.798 "ddgst": false 00:20:49.798 }, 00:20:49.798 "method": "bdev_nvme_attach_controller" 00:20:49.798 },{ 00:20:49.798 "params": { 00:20:49.798 "name": "Nvme4", 00:20:49.798 "trtype": "tcp", 00:20:49.798 "traddr": "10.0.0.2", 00:20:49.798 "adrfam": "ipv4", 00:20:49.798 "trsvcid": "4420", 00:20:49.798 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:49.798 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:49.798 "hdgst": false, 00:20:49.798 "ddgst": false 00:20:49.798 }, 00:20:49.798 "method": "bdev_nvme_attach_controller" 00:20:49.798 },{ 00:20:49.798 "params": { 00:20:49.798 "name": "Nvme5", 00:20:49.798 "trtype": "tcp", 00:20:49.798 "traddr": "10.0.0.2", 00:20:49.798 "adrfam": "ipv4", 00:20:49.798 "trsvcid": "4420", 00:20:49.798 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:49.798 "hostnqn": 
"nqn.2016-06.io.spdk:host5", 00:20:49.798 "hdgst": false, 00:20:49.798 "ddgst": false 00:20:49.798 }, 00:20:49.798 "method": "bdev_nvme_attach_controller" 00:20:49.798 },{ 00:20:49.798 "params": { 00:20:49.798 "name": "Nvme6", 00:20:49.798 "trtype": "tcp", 00:20:49.798 "traddr": "10.0.0.2", 00:20:49.798 "adrfam": "ipv4", 00:20:49.798 "trsvcid": "4420", 00:20:49.798 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:49.798 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:49.798 "hdgst": false, 00:20:49.798 "ddgst": false 00:20:49.798 }, 00:20:49.798 "method": "bdev_nvme_attach_controller" 00:20:49.798 },{ 00:20:49.798 "params": { 00:20:49.798 "name": "Nvme7", 00:20:49.798 "trtype": "tcp", 00:20:49.798 "traddr": "10.0.0.2", 00:20:49.798 "adrfam": "ipv4", 00:20:49.798 "trsvcid": "4420", 00:20:49.798 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:49.798 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:49.798 "hdgst": false, 00:20:49.798 "ddgst": false 00:20:49.798 }, 00:20:49.798 "method": "bdev_nvme_attach_controller" 00:20:49.798 },{ 00:20:49.798 "params": { 00:20:49.798 "name": "Nvme8", 00:20:49.798 "trtype": "tcp", 00:20:49.798 "traddr": "10.0.0.2", 00:20:49.798 "adrfam": "ipv4", 00:20:49.798 "trsvcid": "4420", 00:20:49.798 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:49.798 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:49.798 "hdgst": false, 00:20:49.798 "ddgst": false 00:20:49.798 }, 00:20:49.798 "method": "bdev_nvme_attach_controller" 00:20:49.798 },{ 00:20:49.798 "params": { 00:20:49.798 "name": "Nvme9", 00:20:49.798 "trtype": "tcp", 00:20:49.798 "traddr": "10.0.0.2", 00:20:49.798 "adrfam": "ipv4", 00:20:49.798 "trsvcid": "4420", 00:20:49.798 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:49.798 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:49.798 "hdgst": false, 00:20:49.798 "ddgst": false 00:20:49.798 }, 00:20:49.798 "method": "bdev_nvme_attach_controller" 00:20:49.798 },{ 00:20:49.798 "params": { 00:20:49.798 "name": "Nvme10", 00:20:49.798 "trtype": "tcp", 00:20:49.798 "traddr": "10.0.0.2", 00:20:49.798 "adrfam": "ipv4", 00:20:49.798 "trsvcid": "4420", 00:20:49.798 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:49.798 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:49.798 "hdgst": false, 00:20:49.798 "ddgst": false 00:20:49.798 }, 00:20:49.798 "method": "bdev_nvme_attach_controller" 00:20:49.798 }' 00:20:49.798 [2024-04-26 14:57:32.414413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.058 [2024-04-26 14:57:32.476982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.996 Running I/O for 1 seconds... 
00:20:52.379 00:20:52.379 Latency(us) 00:20:52.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.379 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme1n1 : 1.15 222.69 13.92 0.00 0.00 284606.72 18131.63 251658.24 00:20:52.379 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme2n1 : 1.15 223.56 13.97 0.00 0.00 278606.08 19660.80 295348.91 00:20:52.379 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme3n1 : 1.18 271.83 16.99 0.00 0.00 224042.67 17039.36 242920.11 00:20:52.379 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme4n1 : 1.10 232.85 14.55 0.00 0.00 257864.75 18022.40 263891.63 00:20:52.379 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme5n1 : 1.14 228.30 14.27 0.00 0.00 257329.20 7918.93 228939.09 00:20:52.379 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme6n1 : 1.19 269.94 16.87 0.00 0.00 214516.05 8574.29 251658.24 00:20:52.379 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme7n1 : 1.19 268.72 16.79 0.00 0.00 213148.67 15073.28 248162.99 00:20:52.379 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme8n1 : 1.14 225.41 14.09 0.00 0.00 248052.48 18459.31 246415.36 00:20:52.379 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme9n1 : 1.18 216.78 13.55 0.00 0.00 254424.96 18240.85 274377.39 00:20:52.379 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.379 Verification LBA range: start 0x0 length 0x400 00:20:52.379 Nvme10n1 : 1.20 266.70 16.67 0.00 0.00 203768.23 10321.92 256901.12 00:20:52.379 =================================================================================================================== 00:20:52.379 Total : 2426.77 151.67 0.00 0.00 240953.15 7918.93 295348.91 00:20:52.379 14:57:34 -- target/shutdown.sh@94 -- # stoptarget 00:20:52.379 14:57:34 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:52.379 14:57:34 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:52.379 14:57:34 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:52.379 14:57:34 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:52.379 14:57:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:52.379 14:57:35 -- nvmf/common.sh@117 -- # sync 00:20:52.379 14:57:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:52.379 14:57:35 -- nvmf/common.sh@120 -- # set +e 00:20:52.379 14:57:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:52.379 14:57:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:52.379 rmmod nvme_tcp 00:20:52.379 rmmod nvme_fabrics 00:20:52.379 rmmod nvme_keyring 
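The verify pass whose results appear above is bdevperf driving all ten attached controllers with the flags traced at 14:57:32: queue depth 64, 64 KiB verify I/O, one second per job. A standalone equivalent would look roughly like this, where nvmf_config.json is a hypothetical file holding the generated bdev_nvme_attach_controller entries shown earlier (the test instead passes them via process substitution as /dev/fd/62):

# Hypothetical config file stands in for the /dev/fd/62 process substitution.
./build/examples/bdevperf --json nvmf_config.json -q 64 -o 65536 -w verify -t 1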
00:20:52.640 14:57:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:52.640 14:57:35 -- nvmf/common.sh@124 -- # set -e 00:20:52.640 14:57:35 -- nvmf/common.sh@125 -- # return 0 00:20:52.640 14:57:35 -- nvmf/common.sh@478 -- # '[' -n 1122722 ']' 00:20:52.640 14:57:35 -- nvmf/common.sh@479 -- # killprocess 1122722 00:20:52.640 14:57:35 -- common/autotest_common.sh@936 -- # '[' -z 1122722 ']' 00:20:52.640 14:57:35 -- common/autotest_common.sh@940 -- # kill -0 1122722 00:20:52.640 14:57:35 -- common/autotest_common.sh@941 -- # uname 00:20:52.640 14:57:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:52.640 14:57:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1122722 00:20:52.640 14:57:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:52.640 14:57:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:52.640 14:57:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1122722' 00:20:52.640 killing process with pid 1122722 00:20:52.640 14:57:35 -- common/autotest_common.sh@955 -- # kill 1122722 00:20:52.640 14:57:35 -- common/autotest_common.sh@960 -- # wait 1122722 00:20:52.901 14:57:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:52.901 14:57:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:52.901 14:57:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:52.901 14:57:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.901 14:57:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.901 14:57:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.901 14:57:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.901 14:57:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.812 14:57:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:54.812 00:20:54.812 real 0m15.768s 00:20:54.812 user 0m31.777s 00:20:54.812 sys 0m6.240s 00:20:54.812 14:57:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:54.812 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:20:54.812 ************************************ 00:20:54.813 END TEST nvmf_shutdown_tc1 00:20:54.813 ************************************ 00:20:54.813 14:57:37 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:54.813 14:57:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:54.813 14:57:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:54.813 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:20:55.073 ************************************ 00:20:55.073 START TEST nvmf_shutdown_tc2 00:20:55.073 ************************************ 00:20:55.073 14:57:37 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:20:55.073 14:57:37 -- target/shutdown.sh@99 -- # starttarget 00:20:55.073 14:57:37 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:55.073 14:57:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:55.073 14:57:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.073 14:57:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:55.073 14:57:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:55.073 14:57:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:55.073 14:57:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.073 14:57:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.073 14:57:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.073 14:57:37 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:55.073 14:57:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:55.073 14:57:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:55.073 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:20:55.073 14:57:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:55.073 14:57:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.073 14:57:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.073 14:57:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.073 14:57:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.073 14:57:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.073 14:57:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.073 14:57:37 -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.073 14:57:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:55.073 14:57:37 -- nvmf/common.sh@296 -- # e810=() 00:20:55.073 14:57:37 -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.073 14:57:37 -- nvmf/common.sh@297 -- # x722=() 00:20:55.073 14:57:37 -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.073 14:57:37 -- nvmf/common.sh@298 -- # mlx=() 00:20:55.073 14:57:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.073 14:57:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.073 14:57:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.073 14:57:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.073 14:57:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.073 14:57:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.073 14:57:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.073 14:57:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.073 14:57:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.073 14:57:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.074 14:57:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.074 14:57:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.074 14:57:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.074 14:57:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.074 14:57:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.074 14:57:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.074 14:57:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:55.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:55.074 14:57:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.074 14:57:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:55.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:55.074 14:57:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.074 14:57:37 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.074 14:57:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.074 14:57:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.074 14:57:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:55.074 14:57:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.074 14:57:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:55.074 Found net devices under 0000:31:00.0: cvl_0_0 00:20:55.074 14:57:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.074 14:57:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.074 14:57:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.074 14:57:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:55.074 14:57:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.074 14:57:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:55.074 Found net devices under 0000:31:00.1: cvl_0_1 00:20:55.074 14:57:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.074 14:57:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:55.074 14:57:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:55.074 14:57:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:55.074 14:57:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:55.074 14:57:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.074 14:57:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.074 14:57:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.074 14:57:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:55.074 14:57:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.074 14:57:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.074 14:57:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:55.074 14:57:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.074 14:57:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.074 14:57:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:55.074 14:57:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:55.074 14:57:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.074 14:57:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.335 14:57:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.335 14:57:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.335 14:57:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:55.335 14:57:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.335 14:57:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.335 14:57:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:20:55.335 14:57:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:55.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:20:55.335 00:20:55.335 --- 10.0.0.2 ping statistics --- 00:20:55.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.335 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:20:55.335 14:57:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:20:55.335 00:20:55.335 --- 10.0.0.1 ping statistics --- 00:20:55.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.335 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:20:55.335 14:57:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.335 14:57:37 -- nvmf/common.sh@411 -- # return 0 00:20:55.335 14:57:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:55.335 14:57:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.335 14:57:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:55.335 14:57:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:55.335 14:57:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.335 14:57:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:55.335 14:57:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:55.596 14:57:38 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:55.596 14:57:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:55.596 14:57:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:55.596 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:20:55.596 14:57:38 -- nvmf/common.sh@470 -- # nvmfpid=1124803 00:20:55.596 14:57:38 -- nvmf/common.sh@471 -- # waitforlisten 1124803 00:20:55.596 14:57:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:55.596 14:57:38 -- common/autotest_common.sh@817 -- # '[' -z 1124803 ']' 00:20:55.596 14:57:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.596 14:57:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:55.596 14:57:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.596 14:57:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:55.596 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:20:55.596 [2024-04-26 14:57:38.089477] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:55.596 [2024-04-26 14:57:38.089569] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.596 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.596 [2024-04-26 14:57:38.178060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.596 [2024-04-26 14:57:38.237686] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:55.596 [2024-04-26 14:57:38.237724] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.596 [2024-04-26 14:57:38.237730] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.596 [2024-04-26 14:57:38.237734] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.596 [2024-04-26 14:57:38.237739] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.596 [2024-04-26 14:57:38.237869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.596 [2024-04-26 14:57:38.238037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.596 [2024-04-26 14:57:38.238195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.596 [2024-04-26 14:57:38.238197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:56.534 14:57:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:56.534 14:57:38 -- common/autotest_common.sh@850 -- # return 0 00:20:56.534 14:57:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:56.534 14:57:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:56.534 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:20:56.534 14:57:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.534 14:57:38 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.534 14:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.534 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:20:56.534 [2024-04-26 14:57:38.897910] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.534 14:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.534 14:57:38 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:56.534 14:57:38 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:56.534 14:57:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:56.534 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:20:56.534 14:57:38 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- 
target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.534 14:57:38 -- target/shutdown.sh@28 -- # cat 00:20:56.534 14:57:38 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:56.534 14:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.534 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:20:56.534 Malloc1 00:20:56.534 [2024-04-26 14:57:38.996695] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.534 Malloc2 00:20:56.534 Malloc3 00:20:56.534 Malloc4 00:20:56.534 Malloc5 00:20:56.534 Malloc6 00:20:56.793 Malloc7 00:20:56.793 Malloc8 00:20:56.793 Malloc9 00:20:56.793 Malloc10 00:20:56.793 14:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.793 14:57:39 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:56.793 14:57:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:56.793 14:57:39 -- common/autotest_common.sh@10 -- # set +x 00:20:56.793 14:57:39 -- target/shutdown.sh@103 -- # perfpid=1125007 00:20:56.793 14:57:39 -- target/shutdown.sh@104 -- # waitforlisten 1125007 /var/tmp/bdevperf.sock 00:20:56.793 14:57:39 -- common/autotest_common.sh@817 -- # '[' -z 1125007 ']' 00:20:56.793 14:57:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.793 14:57:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.793 14:57:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.793 14:57:39 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:56.793 14:57:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.793 14:57:39 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:56.793 14:57:39 -- common/autotest_common.sh@10 -- # set +x 00:20:56.793 14:57:39 -- nvmf/common.sh@521 -- # config=() 00:20:56.793 14:57:39 -- nvmf/common.sh@521 -- # local subsystem config 00:20:56.793 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.793 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.793 { 00:20:56.793 "params": { 00:20:56.793 "name": "Nvme$subsystem", 00:20:56.793 "trtype": "$TEST_TRANSPORT", 00:20:56.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.793 "adrfam": "ipv4", 00:20:56.793 "trsvcid": "$NVMF_PORT", 00:20:56.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.793 "hdgst": ${hdgst:-false}, 00:20:56.793 "ddgst": ${ddgst:-false} 00:20:56.793 }, 00:20:56.793 "method": "bdev_nvme_attach_controller" 00:20:56.793 } 00:20:56.793 EOF 00:20:56.793 )") 00:20:56.793 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:56.793 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.793 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.793 { 00:20:56.793 "params": { 00:20:56.793 "name": "Nvme$subsystem", 00:20:56.793 "trtype": "$TEST_TRANSPORT", 00:20:56.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.793 "adrfam": "ipv4", 00:20:56.793 "trsvcid": "$NVMF_PORT", 00:20:56.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.793 
"hdgst": ${hdgst:-false}, 00:20:56.793 "ddgst": ${ddgst:-false} 00:20:56.793 }, 00:20:56.793 "method": "bdev_nvme_attach_controller" 00:20:56.793 } 00:20:56.793 EOF 00:20:56.793 )") 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:56.794 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.794 { 00:20:56.794 "params": { 00:20:56.794 "name": "Nvme$subsystem", 00:20:56.794 "trtype": "$TEST_TRANSPORT", 00:20:56.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.794 "adrfam": "ipv4", 00:20:56.794 "trsvcid": "$NVMF_PORT", 00:20:56.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.794 "hdgst": ${hdgst:-false}, 00:20:56.794 "ddgst": ${ddgst:-false} 00:20:56.794 }, 00:20:56.794 "method": "bdev_nvme_attach_controller" 00:20:56.794 } 00:20:56.794 EOF 00:20:56.794 )") 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:56.794 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.794 { 00:20:56.794 "params": { 00:20:56.794 "name": "Nvme$subsystem", 00:20:56.794 "trtype": "$TEST_TRANSPORT", 00:20:56.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.794 "adrfam": "ipv4", 00:20:56.794 "trsvcid": "$NVMF_PORT", 00:20:56.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.794 "hdgst": ${hdgst:-false}, 00:20:56.794 "ddgst": ${ddgst:-false} 00:20:56.794 }, 00:20:56.794 "method": "bdev_nvme_attach_controller" 00:20:56.794 } 00:20:56.794 EOF 00:20:56.794 )") 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:56.794 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.794 { 00:20:56.794 "params": { 00:20:56.794 "name": "Nvme$subsystem", 00:20:56.794 "trtype": "$TEST_TRANSPORT", 00:20:56.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.794 "adrfam": "ipv4", 00:20:56.794 "trsvcid": "$NVMF_PORT", 00:20:56.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.794 "hdgst": ${hdgst:-false}, 00:20:56.794 "ddgst": ${ddgst:-false} 00:20:56.794 }, 00:20:56.794 "method": "bdev_nvme_attach_controller" 00:20:56.794 } 00:20:56.794 EOF 00:20:56.794 )") 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:56.794 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.794 { 00:20:56.794 "params": { 00:20:56.794 "name": "Nvme$subsystem", 00:20:56.794 "trtype": "$TEST_TRANSPORT", 00:20:56.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.794 "adrfam": "ipv4", 00:20:56.794 "trsvcid": "$NVMF_PORT", 00:20:56.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.794 "hdgst": ${hdgst:-false}, 00:20:56.794 "ddgst": ${ddgst:-false} 00:20:56.794 }, 00:20:56.794 "method": "bdev_nvme_attach_controller" 00:20:56.794 } 00:20:56.794 EOF 00:20:56.794 )") 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:56.794 [2024-04-26 14:57:39.437654] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:20:56.794 [2024-04-26 14:57:39.437705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125007 ] 00:20:56.794 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.794 { 00:20:56.794 "params": { 00:20:56.794 "name": "Nvme$subsystem", 00:20:56.794 "trtype": "$TEST_TRANSPORT", 00:20:56.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.794 "adrfam": "ipv4", 00:20:56.794 "trsvcid": "$NVMF_PORT", 00:20:56.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.794 "hdgst": ${hdgst:-false}, 00:20:56.794 "ddgst": ${ddgst:-false} 00:20:56.794 }, 00:20:56.794 "method": "bdev_nvme_attach_controller" 00:20:56.794 } 00:20:56.794 EOF 00:20:56.794 )") 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:56.794 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.794 { 00:20:56.794 "params": { 00:20:56.794 "name": "Nvme$subsystem", 00:20:56.794 "trtype": "$TEST_TRANSPORT", 00:20:56.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.794 "adrfam": "ipv4", 00:20:56.794 "trsvcid": "$NVMF_PORT", 00:20:56.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.794 "hdgst": ${hdgst:-false}, 00:20:56.794 "ddgst": ${ddgst:-false} 00:20:56.794 }, 00:20:56.794 "method": "bdev_nvme_attach_controller" 00:20:56.794 } 00:20:56.794 EOF 00:20:56.794 )") 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:56.794 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.794 { 00:20:56.794 "params": { 00:20:56.794 "name": "Nvme$subsystem", 00:20:56.794 "trtype": "$TEST_TRANSPORT", 00:20:56.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.794 "adrfam": "ipv4", 00:20:56.794 "trsvcid": "$NVMF_PORT", 00:20:56.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.794 "hdgst": ${hdgst:-false}, 00:20:56.794 "ddgst": ${ddgst:-false} 00:20:56.794 }, 00:20:56.794 "method": "bdev_nvme_attach_controller" 00:20:56.794 } 00:20:56.794 EOF 00:20:56.794 )") 00:20:56.794 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:57.054 14:57:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:57.054 14:57:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:57.054 { 00:20:57.054 "params": { 00:20:57.054 "name": "Nvme$subsystem", 00:20:57.054 "trtype": "$TEST_TRANSPORT", 00:20:57.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.054 "adrfam": "ipv4", 00:20:57.054 "trsvcid": "$NVMF_PORT", 00:20:57.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.054 "hdgst": ${hdgst:-false}, 00:20:57.054 "ddgst": ${ddgst:-false} 00:20:57.054 }, 00:20:57.054 "method": "bdev_nvme_attach_controller" 00:20:57.054 } 00:20:57.054 EOF 00:20:57.054 )") 00:20:57.054 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.054 14:57:39 -- nvmf/common.sh@543 -- # cat 00:20:57.054 14:57:39 -- nvmf/common.sh@545 -- # jq . 
00:20:57.054 14:57:39 -- nvmf/common.sh@546 -- # IFS=, 00:20:57.054 14:57:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:57.054 "params": { 00:20:57.054 "name": "Nvme1", 00:20:57.054 "trtype": "tcp", 00:20:57.054 "traddr": "10.0.0.2", 00:20:57.054 "adrfam": "ipv4", 00:20:57.054 "trsvcid": "4420", 00:20:57.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.054 "hdgst": false, 00:20:57.054 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": "bdev_nvme_attach_controller" 00:20:57.055 },{ 00:20:57.055 "params": { 00:20:57.055 "name": "Nvme2", 00:20:57.055 "trtype": "tcp", 00:20:57.055 "traddr": "10.0.0.2", 00:20:57.055 "adrfam": "ipv4", 00:20:57.055 "trsvcid": "4420", 00:20:57.055 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:57.055 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:57.055 "hdgst": false, 00:20:57.055 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": "bdev_nvme_attach_controller" 00:20:57.055 },{ 00:20:57.055 "params": { 00:20:57.055 "name": "Nvme3", 00:20:57.055 "trtype": "tcp", 00:20:57.055 "traddr": "10.0.0.2", 00:20:57.055 "adrfam": "ipv4", 00:20:57.055 "trsvcid": "4420", 00:20:57.055 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:57.055 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:57.055 "hdgst": false, 00:20:57.055 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": "bdev_nvme_attach_controller" 00:20:57.055 },{ 00:20:57.055 "params": { 00:20:57.055 "name": "Nvme4", 00:20:57.055 "trtype": "tcp", 00:20:57.055 "traddr": "10.0.0.2", 00:20:57.055 "adrfam": "ipv4", 00:20:57.055 "trsvcid": "4420", 00:20:57.055 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:57.055 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:57.055 "hdgst": false, 00:20:57.055 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": "bdev_nvme_attach_controller" 00:20:57.055 },{ 00:20:57.055 "params": { 00:20:57.055 "name": "Nvme5", 00:20:57.055 "trtype": "tcp", 00:20:57.055 "traddr": "10.0.0.2", 00:20:57.055 "adrfam": "ipv4", 00:20:57.055 "trsvcid": "4420", 00:20:57.055 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:57.055 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:57.055 "hdgst": false, 00:20:57.055 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": "bdev_nvme_attach_controller" 00:20:57.055 },{ 00:20:57.055 "params": { 00:20:57.055 "name": "Nvme6", 00:20:57.055 "trtype": "tcp", 00:20:57.055 "traddr": "10.0.0.2", 00:20:57.055 "adrfam": "ipv4", 00:20:57.055 "trsvcid": "4420", 00:20:57.055 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:57.055 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:57.055 "hdgst": false, 00:20:57.055 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": "bdev_nvme_attach_controller" 00:20:57.055 },{ 00:20:57.055 "params": { 00:20:57.055 "name": "Nvme7", 00:20:57.055 "trtype": "tcp", 00:20:57.055 "traddr": "10.0.0.2", 00:20:57.055 "adrfam": "ipv4", 00:20:57.055 "trsvcid": "4420", 00:20:57.055 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:57.055 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:57.055 "hdgst": false, 00:20:57.055 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": "bdev_nvme_attach_controller" 00:20:57.055 },{ 00:20:57.055 "params": { 00:20:57.055 "name": "Nvme8", 00:20:57.055 "trtype": "tcp", 00:20:57.055 "traddr": "10.0.0.2", 00:20:57.055 "adrfam": "ipv4", 00:20:57.055 "trsvcid": "4420", 00:20:57.055 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:57.055 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:57.055 "hdgst": false, 00:20:57.055 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": 
"bdev_nvme_attach_controller" 00:20:57.055 },{ 00:20:57.055 "params": { 00:20:57.055 "name": "Nvme9", 00:20:57.055 "trtype": "tcp", 00:20:57.055 "traddr": "10.0.0.2", 00:20:57.055 "adrfam": "ipv4", 00:20:57.055 "trsvcid": "4420", 00:20:57.055 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:57.055 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:57.055 "hdgst": false, 00:20:57.055 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": "bdev_nvme_attach_controller" 00:20:57.055 },{ 00:20:57.055 "params": { 00:20:57.055 "name": "Nvme10", 00:20:57.055 "trtype": "tcp", 00:20:57.055 "traddr": "10.0.0.2", 00:20:57.055 "adrfam": "ipv4", 00:20:57.055 "trsvcid": "4420", 00:20:57.055 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:57.055 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:57.055 "hdgst": false, 00:20:57.055 "ddgst": false 00:20:57.055 }, 00:20:57.055 "method": "bdev_nvme_attach_controller" 00:20:57.055 }' 00:20:57.055 [2024-04-26 14:57:39.498347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.055 [2024-04-26 14:57:39.561332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.962 Running I/O for 10 seconds... 00:20:58.962 14:57:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:58.962 14:57:41 -- common/autotest_common.sh@850 -- # return 0 00:20:58.962 14:57:41 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:58.962 14:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.962 14:57:41 -- common/autotest_common.sh@10 -- # set +x 00:20:58.962 14:57:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.962 14:57:41 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:58.962 14:57:41 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:58.962 14:57:41 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:58.962 14:57:41 -- target/shutdown.sh@57 -- # local ret=1 00:20:58.962 14:57:41 -- target/shutdown.sh@58 -- # local i 00:20:58.962 14:57:41 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:58.962 14:57:41 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:58.962 14:57:41 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:58.962 14:57:41 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:58.962 14:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.962 14:57:41 -- common/autotest_common.sh@10 -- # set +x 00:20:58.962 14:57:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.962 14:57:41 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:58.962 14:57:41 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:58.962 14:57:41 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:59.221 14:57:41 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:59.221 14:57:41 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:59.221 14:57:41 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:59.221 14:57:41 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:59.221 14:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.221 14:57:41 -- common/autotest_common.sh@10 -- # set +x 00:20:59.221 14:57:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.221 14:57:41 -- target/shutdown.sh@60 -- # read_io_count=71 00:20:59.221 14:57:41 -- target/shutdown.sh@63 -- # '[' 71 -ge 100 ']' 00:20:59.221 14:57:41 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:59.481 14:57:41 -- target/shutdown.sh@59 -- # (( i-- )) 
00:20:59.481 14:57:41 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:59.481 14:57:41 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:59.481 14:57:41 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:59.481 14:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.481 14:57:41 -- common/autotest_common.sh@10 -- # set +x 00:20:59.481 14:57:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.481 14:57:41 -- target/shutdown.sh@60 -- # read_io_count=135 00:20:59.481 14:57:41 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:20:59.481 14:57:41 -- target/shutdown.sh@64 -- # ret=0 00:20:59.481 14:57:41 -- target/shutdown.sh@65 -- # break 00:20:59.481 14:57:41 -- target/shutdown.sh@69 -- # return 0 00:20:59.481 14:57:41 -- target/shutdown.sh@110 -- # killprocess 1125007 00:20:59.481 14:57:41 -- common/autotest_common.sh@936 -- # '[' -z 1125007 ']' 00:20:59.482 14:57:41 -- common/autotest_common.sh@940 -- # kill -0 1125007 00:20:59.482 14:57:41 -- common/autotest_common.sh@941 -- # uname 00:20:59.482 14:57:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:59.482 14:57:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1125007 00:20:59.482 14:57:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:59.482 14:57:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:59.482 14:57:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1125007' 00:20:59.482 killing process with pid 1125007 00:20:59.482 14:57:42 -- common/autotest_common.sh@955 -- # kill 1125007 00:20:59.482 14:57:42 -- common/autotest_common.sh@960 -- # wait 1125007 00:20:59.482 Received shutdown signal, test time was about 0.952540 seconds 00:20:59.482 00:20:59.482 Latency(us) 00:20:59.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.482 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme1n1 : 0.92 218.57 13.66 0.00 0.00 287692.61 3167.57 230686.72 00:20:59.482 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme2n1 : 0.91 211.05 13.19 0.00 0.00 293346.99 19333.12 248162.99 00:20:59.482 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme3n1 : 0.95 270.52 16.91 0.00 0.00 224312.11 27197.44 239424.85 00:20:59.482 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme4n1 : 0.95 266.91 16.68 0.00 0.00 222322.28 16930.13 248162.99 00:20:59.482 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme5n1 : 0.93 207.40 12.96 0.00 0.00 279647.86 23483.73 251658.24 00:20:59.482 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme6n1 : 0.94 276.52 17.28 0.00 0.00 205017.93 2607.79 251658.24 00:20:59.482 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme7n1 : 0.94 273.50 17.09 0.00 0.00 202609.92 13981.01 235929.60 00:20:59.482 Job: Nvme8n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme8n1 : 0.94 271.41 16.96 0.00 0.00 199841.81 10758.83 249910.61 00:20:59.482 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme9n1 : 0.93 205.57 12.85 0.00 0.00 257139.77 25449.81 272629.76 00:20:59.482 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.482 Verification LBA range: start 0x0 length 0x400 00:20:59.482 Nvme10n1 : 0.93 206.65 12.92 0.00 0.00 249420.52 18568.53 249910.61 00:20:59.482 =================================================================================================================== 00:20:59.482 Total : 2408.08 150.51 0.00 0.00 237817.35 2607.79 272629.76 00:20:59.743 14:57:42 -- target/shutdown.sh@113 -- # sleep 1 00:21:00.684 14:57:43 -- target/shutdown.sh@114 -- # kill -0 1124803 00:21:00.684 14:57:43 -- target/shutdown.sh@116 -- # stoptarget 00:21:00.684 14:57:43 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:00.684 14:57:43 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:00.684 14:57:43 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.684 14:57:43 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:00.684 14:57:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:00.684 14:57:43 -- nvmf/common.sh@117 -- # sync 00:21:00.684 14:57:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.684 14:57:43 -- nvmf/common.sh@120 -- # set +e 00:21:00.684 14:57:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.684 14:57:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.684 rmmod nvme_tcp 00:21:00.684 rmmod nvme_fabrics 00:21:00.684 rmmod nvme_keyring 00:21:00.945 14:57:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.945 14:57:43 -- nvmf/common.sh@124 -- # set -e 00:21:00.945 14:57:43 -- nvmf/common.sh@125 -- # return 0 00:21:00.945 14:57:43 -- nvmf/common.sh@478 -- # '[' -n 1124803 ']' 00:21:00.945 14:57:43 -- nvmf/common.sh@479 -- # killprocess 1124803 00:21:00.945 14:57:43 -- common/autotest_common.sh@936 -- # '[' -z 1124803 ']' 00:21:00.945 14:57:43 -- common/autotest_common.sh@940 -- # kill -0 1124803 00:21:00.945 14:57:43 -- common/autotest_common.sh@941 -- # uname 00:21:00.945 14:57:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.945 14:57:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1124803 00:21:00.945 14:57:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:00.945 14:57:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:00.945 14:57:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1124803' 00:21:00.945 killing process with pid 1124803 00:21:00.945 14:57:43 -- common/autotest_common.sh@955 -- # kill 1124803 00:21:00.945 14:57:43 -- common/autotest_common.sh@960 -- # wait 1124803 00:21:01.204 14:57:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:01.205 14:57:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:01.205 14:57:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:01.205 14:57:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.205 14:57:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.205 14:57:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:21:01.205 14:57:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.205 14:57:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.118 14:57:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.118 00:21:03.118 real 0m8.103s 00:21:03.118 user 0m24.653s 00:21:03.118 sys 0m1.236s 00:21:03.118 14:57:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:03.118 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:21:03.118 ************************************ 00:21:03.118 END TEST nvmf_shutdown_tc2 00:21:03.118 ************************************ 00:21:03.118 14:57:45 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:03.118 14:57:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:03.118 14:57:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:03.118 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:21:03.381 ************************************ 00:21:03.381 START TEST nvmf_shutdown_tc3 00:21:03.381 ************************************ 00:21:03.381 14:57:45 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:21:03.381 14:57:45 -- target/shutdown.sh@121 -- # starttarget 00:21:03.381 14:57:45 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:03.381 14:57:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:03.381 14:57:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.381 14:57:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:03.381 14:57:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:03.381 14:57:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:03.381 14:57:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.381 14:57:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.381 14:57:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.381 14:57:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:03.381 14:57:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.381 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:21:03.381 14:57:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:03.381 14:57:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:03.381 14:57:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:03.381 14:57:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:03.381 14:57:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:03.381 14:57:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:03.381 14:57:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:03.381 14:57:45 -- nvmf/common.sh@295 -- # net_devs=() 00:21:03.381 14:57:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:03.381 14:57:45 -- nvmf/common.sh@296 -- # e810=() 00:21:03.381 14:57:45 -- nvmf/common.sh@296 -- # local -ga e810 00:21:03.381 14:57:45 -- nvmf/common.sh@297 -- # x722=() 00:21:03.381 14:57:45 -- nvmf/common.sh@297 -- # local -ga x722 00:21:03.381 14:57:45 -- nvmf/common.sh@298 -- # mlx=() 00:21:03.381 14:57:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:03.381 14:57:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.381 14:57:45 
-- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.381 14:57:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:03.381 14:57:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:03.381 14:57:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:03.381 14:57:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.381 14:57:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:03.381 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:03.381 14:57:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.381 14:57:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:03.381 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:03.381 14:57:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.381 14:57:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.381 14:57:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.381 14:57:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:03.381 14:57:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.381 14:57:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:03.381 Found net devices under 0000:31:00.0: cvl_0_0 00:21:03.381 14:57:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.381 14:57:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.381 14:57:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.381 14:57:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:03.381 14:57:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.381 14:57:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:03.381 Found net devices under 0000:31:00.1: cvl_0_1 00:21:03.381 14:57:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.381 
14:57:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:03.381 14:57:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:03.381 14:57:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:03.381 14:57:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:03.381 14:57:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.381 14:57:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.381 14:57:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.381 14:57:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:03.381 14:57:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.381 14:57:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.381 14:57:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:03.381 14:57:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.381 14:57:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.381 14:57:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:03.381 14:57:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:03.381 14:57:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.381 14:57:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.682 14:57:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.682 14:57:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.682 14:57:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:03.682 14:57:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.682 14:57:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.682 14:57:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.682 14:57:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:03.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:21:03.682 00:21:03.682 --- 10.0.0.2 ping statistics --- 00:21:03.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.682 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:21:03.682 14:57:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:21:03.682 00:21:03.682 --- 10.0.0.1 ping statistics --- 00:21:03.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.682 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:03.682 14:57:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.682 14:57:46 -- nvmf/common.sh@411 -- # return 0 00:21:03.682 14:57:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:03.682 14:57:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.682 14:57:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:03.682 14:57:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:03.682 14:57:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.682 14:57:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:03.682 14:57:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:03.682 14:57:46 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:03.682 14:57:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:03.682 14:57:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:03.682 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:21:03.682 14:57:46 -- nvmf/common.sh@470 -- # nvmfpid=1126488 00:21:03.682 14:57:46 -- nvmf/common.sh@471 -- # waitforlisten 1126488 00:21:03.682 14:57:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:03.682 14:57:46 -- common/autotest_common.sh@817 -- # '[' -z 1126488 ']' 00:21:03.682 14:57:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.682 14:57:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:03.682 14:57:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.682 14:57:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:03.682 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:21:03.976 [2024-04-26 14:57:46.369106] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:03.976 [2024-04-26 14:57:46.369169] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.976 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.976 [2024-04-26 14:57:46.453261] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.976 [2024-04-26 14:57:46.508262] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.976 [2024-04-26 14:57:46.508294] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.977 [2024-04-26 14:57:46.508299] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.977 [2024-04-26 14:57:46.508304] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.977 [2024-04-26 14:57:46.508308] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:03.977 [2024-04-26 14:57:46.508428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.977 [2024-04-26 14:57:46.508583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.977 [2024-04-26 14:57:46.508739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.977 [2024-04-26 14:57:46.508742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:04.546 14:57:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:04.546 14:57:47 -- common/autotest_common.sh@850 -- # return 0 00:21:04.546 14:57:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:04.546 14:57:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:04.546 14:57:47 -- common/autotest_common.sh@10 -- # set +x 00:21:04.546 14:57:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.546 14:57:47 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:04.546 14:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.546 14:57:47 -- common/autotest_common.sh@10 -- # set +x 00:21:04.546 [2024-04-26 14:57:47.172961] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.546 14:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:04.546 14:57:47 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:04.546 14:57:47 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:04.546 14:57:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:04.546 14:57:47 -- common/autotest_common.sh@10 -- # set +x 00:21:04.546 14:57:47 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:04.546 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.546 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.546 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.546 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.546 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.546 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.546 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.546 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.546 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.546 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.546 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.546 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.805 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.805 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.805 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.805 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.805 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.805 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.805 14:57:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:04.805 14:57:47 -- target/shutdown.sh@28 -- # cat 00:21:04.805 14:57:47 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:04.805 14:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:04.805 14:57:47 -- common/autotest_common.sh@10 -- # set +x 00:21:04.805 Malloc1 00:21:04.805 [2024-04-26 14:57:47.267764] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.805 Malloc2 
00:21:04.805 Malloc3 00:21:04.805 Malloc4 00:21:04.805 Malloc5 00:21:04.805 Malloc6 00:21:05.065 Malloc7 00:21:05.065 Malloc8 00:21:05.065 Malloc9 00:21:05.065 Malloc10 00:21:05.065 14:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.065 14:57:47 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:05.065 14:57:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:05.065 14:57:47 -- common/autotest_common.sh@10 -- # set +x 00:21:05.065 14:57:47 -- target/shutdown.sh@125 -- # perfpid=1126845 00:21:05.065 14:57:47 -- target/shutdown.sh@126 -- # waitforlisten 1126845 /var/tmp/bdevperf.sock 00:21:05.065 14:57:47 -- common/autotest_common.sh@817 -- # '[' -z 1126845 ']' 00:21:05.065 14:57:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.065 14:57:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:05.065 14:57:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.065 14:57:47 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:05.065 14:57:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:05.065 14:57:47 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:05.065 14:57:47 -- common/autotest_common.sh@10 -- # set +x 00:21:05.065 14:57:47 -- nvmf/common.sh@521 -- # config=() 00:21:05.065 14:57:47 -- nvmf/common.sh@521 -- # local subsystem config 00:21:05.065 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.065 { 00:21:05.065 "params": { 00:21:05.065 "name": "Nvme$subsystem", 00:21:05.065 "trtype": "$TEST_TRANSPORT", 00:21:05.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.065 "adrfam": "ipv4", 00:21:05.065 "trsvcid": "$NVMF_PORT", 00:21:05.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.065 "hdgst": ${hdgst:-false}, 00:21:05.065 "ddgst": ${ddgst:-false} 00:21:05.065 }, 00:21:05.065 "method": "bdev_nvme_attach_controller" 00:21:05.065 } 00:21:05.065 EOF 00:21:05.065 )") 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.065 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.065 { 00:21:05.065 "params": { 00:21:05.065 "name": "Nvme$subsystem", 00:21:05.065 "trtype": "$TEST_TRANSPORT", 00:21:05.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.065 "adrfam": "ipv4", 00:21:05.065 "trsvcid": "$NVMF_PORT", 00:21:05.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.065 "hdgst": ${hdgst:-false}, 00:21:05.065 "ddgst": ${ddgst:-false} 00:21:05.065 }, 00:21:05.065 "method": "bdev_nvme_attach_controller" 00:21:05.065 } 00:21:05.065 EOF 00:21:05.065 )") 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.065 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.065 { 00:21:05.065 "params": { 00:21:05.065 "name": "Nvme$subsystem", 00:21:05.065 "trtype": "$TEST_TRANSPORT", 00:21:05.065 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:21:05.065 "adrfam": "ipv4", 00:21:05.065 "trsvcid": "$NVMF_PORT", 00:21:05.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.065 "hdgst": ${hdgst:-false}, 00:21:05.065 "ddgst": ${ddgst:-false} 00:21:05.065 }, 00:21:05.065 "method": "bdev_nvme_attach_controller" 00:21:05.065 } 00:21:05.065 EOF 00:21:05.065 )") 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.065 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.065 { 00:21:05.065 "params": { 00:21:05.065 "name": "Nvme$subsystem", 00:21:05.065 "trtype": "$TEST_TRANSPORT", 00:21:05.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.065 "adrfam": "ipv4", 00:21:05.065 "trsvcid": "$NVMF_PORT", 00:21:05.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.065 "hdgst": ${hdgst:-false}, 00:21:05.065 "ddgst": ${ddgst:-false} 00:21:05.065 }, 00:21:05.065 "method": "bdev_nvme_attach_controller" 00:21:05.065 } 00:21:05.065 EOF 00:21:05.065 )") 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.065 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.065 { 00:21:05.065 "params": { 00:21:05.065 "name": "Nvme$subsystem", 00:21:05.065 "trtype": "$TEST_TRANSPORT", 00:21:05.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.065 "adrfam": "ipv4", 00:21:05.065 "trsvcid": "$NVMF_PORT", 00:21:05.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.065 "hdgst": ${hdgst:-false}, 00:21:05.065 "ddgst": ${ddgst:-false} 00:21:05.065 }, 00:21:05.065 "method": "bdev_nvme_attach_controller" 00:21:05.065 } 00:21:05.065 EOF 00:21:05.065 )") 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.065 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.065 { 00:21:05.065 "params": { 00:21:05.065 "name": "Nvme$subsystem", 00:21:05.065 "trtype": "$TEST_TRANSPORT", 00:21:05.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.065 "adrfam": "ipv4", 00:21:05.065 "trsvcid": "$NVMF_PORT", 00:21:05.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.065 "hdgst": ${hdgst:-false}, 00:21:05.065 "ddgst": ${ddgst:-false} 00:21:05.065 }, 00:21:05.065 "method": "bdev_nvme_attach_controller" 00:21:05.065 } 00:21:05.065 EOF 00:21:05.065 )") 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.065 [2024-04-26 14:57:47.708346] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:05.065 [2024-04-26 14:57:47.708398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126845 ] 00:21:05.065 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.065 { 00:21:05.065 "params": { 00:21:05.065 "name": "Nvme$subsystem", 00:21:05.065 "trtype": "$TEST_TRANSPORT", 00:21:05.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.065 "adrfam": "ipv4", 00:21:05.065 "trsvcid": "$NVMF_PORT", 00:21:05.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.065 "hdgst": ${hdgst:-false}, 00:21:05.065 "ddgst": ${ddgst:-false} 00:21:05.065 }, 00:21:05.065 "method": "bdev_nvme_attach_controller" 00:21:05.065 } 00:21:05.065 EOF 00:21:05.065 )") 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.065 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.065 { 00:21:05.065 "params": { 00:21:05.065 "name": "Nvme$subsystem", 00:21:05.065 "trtype": "$TEST_TRANSPORT", 00:21:05.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.065 "adrfam": "ipv4", 00:21:05.065 "trsvcid": "$NVMF_PORT", 00:21:05.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.065 "hdgst": ${hdgst:-false}, 00:21:05.065 "ddgst": ${ddgst:-false} 00:21:05.065 }, 00:21:05.065 "method": "bdev_nvme_attach_controller" 00:21:05.065 } 00:21:05.065 EOF 00:21:05.065 )") 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.065 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.065 { 00:21:05.065 "params": { 00:21:05.065 "name": "Nvme$subsystem", 00:21:05.065 "trtype": "$TEST_TRANSPORT", 00:21:05.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.065 "adrfam": "ipv4", 00:21:05.065 "trsvcid": "$NVMF_PORT", 00:21:05.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.065 "hdgst": ${hdgst:-false}, 00:21:05.065 "ddgst": ${ddgst:-false} 00:21:05.065 }, 00:21:05.065 "method": "bdev_nvme_attach_controller" 00:21:05.065 } 00:21:05.065 EOF 00:21:05.065 )") 00:21:05.065 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.325 14:57:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.325 14:57:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.325 { 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme$subsystem", 00:21:05.325 "trtype": "$TEST_TRANSPORT", 00:21:05.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "$NVMF_PORT", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.325 "hdgst": ${hdgst:-false}, 00:21:05.325 "ddgst": ${ddgst:-false} 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 } 00:21:05.325 EOF 00:21:05.325 )") 00:21:05.325 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.325 14:57:47 -- nvmf/common.sh@543 -- # cat 00:21:05.325 14:57:47 -- nvmf/common.sh@545 -- # jq . 
00:21:05.325 14:57:47 -- nvmf/common.sh@546 -- # IFS=, 00:21:05.325 14:57:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme1", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 },{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme2", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 },{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme3", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 },{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme4", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 },{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme5", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 },{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme6", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 },{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme7", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 },{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme8", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": 
"bdev_nvme_attach_controller" 00:21:05.325 },{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme9", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 },{ 00:21:05.325 "params": { 00:21:05.325 "name": "Nvme10", 00:21:05.325 "trtype": "tcp", 00:21:05.325 "traddr": "10.0.0.2", 00:21:05.325 "adrfam": "ipv4", 00:21:05.325 "trsvcid": "4420", 00:21:05.325 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:05.325 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:05.325 "hdgst": false, 00:21:05.325 "ddgst": false 00:21:05.325 }, 00:21:05.325 "method": "bdev_nvme_attach_controller" 00:21:05.325 }' 00:21:05.325 [2024-04-26 14:57:47.769081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.325 [2024-04-26 14:57:47.831966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.702 Running I/O for 10 seconds... 00:21:06.962 14:57:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:06.962 14:57:49 -- common/autotest_common.sh@850 -- # return 0 00:21:06.962 14:57:49 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:06.962 14:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.962 14:57:49 -- common/autotest_common.sh@10 -- # set +x 00:21:06.962 14:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.962 14:57:49 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:06.962 14:57:49 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:06.962 14:57:49 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:06.962 14:57:49 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:06.962 14:57:49 -- target/shutdown.sh@57 -- # local ret=1 00:21:06.962 14:57:49 -- target/shutdown.sh@58 -- # local i 00:21:06.962 14:57:49 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:06.962 14:57:49 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:06.962 14:57:49 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:06.962 14:57:49 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:06.962 14:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.962 14:57:49 -- common/autotest_common.sh@10 -- # set +x 00:21:06.962 14:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.962 14:57:49 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:06.962 14:57:49 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:06.962 14:57:49 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:07.221 14:57:49 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:07.221 14:57:49 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:07.222 14:57:49 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:07.222 14:57:49 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.222 14:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.222 14:57:49 -- common/autotest_common.sh@10 -- # set +x 00:21:07.480 14:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.480 14:57:49 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:07.480 14:57:49 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:07.480 14:57:49 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:07.758 14:57:50 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:07.758 14:57:50 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:07.758 14:57:50 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:07.758 14:57:50 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.758 14:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.758 14:57:50 -- common/autotest_common.sh@10 -- # set +x 00:21:07.758 14:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.758 14:57:50 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:07.758 14:57:50 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:07.758 14:57:50 -- target/shutdown.sh@64 -- # ret=0 00:21:07.758 14:57:50 -- target/shutdown.sh@65 -- # break 00:21:07.758 14:57:50 -- target/shutdown.sh@69 -- # return 0 00:21:07.758 14:57:50 -- target/shutdown.sh@135 -- # killprocess 1126488 00:21:07.758 14:57:50 -- common/autotest_common.sh@936 -- # '[' -z 1126488 ']' 00:21:07.758 14:57:50 -- common/autotest_common.sh@940 -- # kill -0 1126488 00:21:07.758 14:57:50 -- common/autotest_common.sh@941 -- # uname 00:21:07.758 14:57:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:07.758 14:57:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1126488 00:21:07.758 14:57:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:07.758 14:57:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:07.758 14:57:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1126488' 00:21:07.758 killing process with pid 1126488 00:21:07.758 14:57:50 -- common/autotest_common.sh@955 -- # kill 1126488 00:21:07.758 14:57:50 -- common/autotest_common.sh@960 -- # wait 1126488 00:21:07.758 [2024-04-26 14:57:50.258232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 [2024-04-26 14:57:50.258279] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 [2024-04-26 14:57:50.258285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 [2024-04-26 14:57:50.258290] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 [2024-04-26 14:57:50.258295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 [2024-04-26 14:57:50.258300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 [2024-04-26 14:57:50.258305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 [2024-04-26 14:57:50.258309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 [2024-04-26 14:57:50.258314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 [2024-04-26 14:57:50.258319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.758 
[2024-04-26 14:57:50.258323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8a60 is same with the state(5) to be set 00:21:07.759 [2024-04-26 14:57:50.259450] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ab390 is same with the state(5) to be set 00:21:07.760 [2024-04-26 14:57:50.261428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9380 is same with the state(5) to be set 00:21:07.761 [2024-04-26 14:57:50.262411] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9810 is same with the state(5) to be set 00:21:07.761 [2024-04-26 14:57:50.262667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a9ca0 is same with the state(5) to be set 00:21:07.761 [2024-04-26 14:57:50.263602] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa150 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.263885] tcp.c:1587:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x14aa150 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.263889] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa150 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.263893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa150 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.263897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa150 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264659] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264664] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264713] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264722] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264726] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264735] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 
14:57:50.264740] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264749] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264754] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264769] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264778] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264783] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.762 [2024-04-26 14:57:50.264807] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264817] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264821] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264830] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same 
with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264873] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264905] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264928] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264938] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264948] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.264953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aa5e0 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265713] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265732] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265737] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265747] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265752] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265783] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265810] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the 
state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265824] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265836] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265846] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265850] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265873] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265905] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.763 [2024-04-26 14:57:50.265914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265928] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265942] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265955] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.265965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.764 [2024-04-26 14:57:50.267149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.764 [2024-04-26 14:57:50.267735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.764 [2024-04-26 14:57:50.267742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.267986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.267994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.765 [2024-04-26 14:57:50.268242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:07.765 [2024-04-26 14:57:50.268311] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2159460 was disconnected and freed. reset controller. 
00:21:07.765 [2024-04-26 14:57:50.268444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.765 [2024-04-26 14:57:50.268459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.765 [2024-04-26 14:57:50.268474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.765 [2024-04-26 14:57:50.268489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.765 [2024-04-26 14:57:50.268505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:21:07.765 [2024-04-26 14:57:50.268545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.765 [2024-04-26 14:57:50.268554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.765 [2024-04-26 14:57:50.268569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.765 [2024-04-26 14:57:50.268584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.765 [2024-04-26 14:57:50.268600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.765 [2024-04-26 14:57:50.268606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23000f0 is same with the state(5) to be set 00:21:07.766 [2024-04-26 14:57:50.268625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6c10 is same with the state(5) to be set 00:21:07.766 [2024-04-26 14:57:50.268711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c8f0 is same with the state(5) to be set 00:21:07.766 [2024-04-26 14:57:50.268794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:07.766 [2024-04-26 14:57:50.268832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbefd0 is same with the state(5) to be set 00:21:07.766 [2024-04-26 14:57:50.268884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160e30 is same with the state(5) to be set 00:21:07.766 [2024-04-26 14:57:50.268965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.268988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.268996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269018] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2181790 is same with the state(5) to be set 00:21:07.766 [2024-04-26 14:57:50.269047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242a30 is same with the state(5) to be set 00:21:07.766 [2024-04-26 14:57:50.269135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.766 [2024-04-26 14:57:50.269195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215e0c0 is same with the state(5) to be set 00:21:07.766 [2024-04-26 14:57:50.269687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.766 [2024-04-26 14:57:50.269706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.766 [2024-04-26 14:57:50.269725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.766 [2024-04-26 14:57:50.269741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.766 [2024-04-26 14:57:50.269757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.766 [2024-04-26 14:57:50.269774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.766 [2024-04-26 14:57:50.269790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.766 [2024-04-26 14:57:50.269805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.766 [2024-04-26 14:57:50.269825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.766 [2024-04-26 14:57:50.269835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.269854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.269866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.269876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.269890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 
[2024-04-26 14:57:50.269898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.269907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.269914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.269923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.269931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.269940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.269947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.269956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.269963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.269972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.269979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.269988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.269995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 
14:57:50.270063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270227] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.270270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.270277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.274920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274951] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274962] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274967] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.274984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aaf00 is same with the state(5) to be set 00:21:07.767 [2024-04-26 14:57:50.284351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.284383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.284396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 
14:57:50.284404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.284414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.284421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.284430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.767 [2024-04-26 14:57:50.284438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.767 [2024-04-26 14:57:50.284447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284580] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.284884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.284929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:07.768 [2024-04-26 14:57:50.284975] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x221f140 was disconnected and freed. reset controller. 
00:21:07.768 [2024-04-26 14:57:50.285085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 
14:57:50.285260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.768 [2024-04-26 14:57:50.285292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.768 [2024-04-26 14:57:50.285302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 
14:57:50.285425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 
14:57:50.285593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 
14:57:50.285758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 
14:57:50.285933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.769 [2024-04-26 14:57:50.285949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.769 [2024-04-26 14:57:50.285956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.285966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.285973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.285982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.285989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.285998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.286014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.286031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.286048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.286064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.286080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 
14:57:50.286096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.286112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.286129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.286147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.286154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.286206] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22f7eb0 was disconnected and freed. reset controller. 00:21:07.770 [2024-04-26 14:57:50.287677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224d400 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.287728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.770 [2024-04-26 14:57:50.287739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.287750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.770 [2024-04-26 14:57:50.287760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.287769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.770 [2024-04-26 14:57:50.287779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.287788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.770 [2024-04-26 14:57:50.287795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.287803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21807c0 is same with the state(5) to be set 00:21:07.770 [2024-04-26 14:57:50.287820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23000f0 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.287833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x21a6c10 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.287852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216c8f0 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.287868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbefd0 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.287881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160e30 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.287896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2181790 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.287908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242a30 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.287920] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215e0c0 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.290569] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:07.770 [2024-04-26 14:57:50.290594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:07.770 [2024-04-26 14:57:50.291188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:07.770 [2024-04-26 14:57:50.291580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.770 [2024-04-26 14:57:50.291810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.770 [2024-04-26 14:57:50.291826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23000f0 with addr=10.0.0.2, port=4420 00:21:07.770 [2024-04-26 14:57:50.291835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23000f0 is same with the state(5) to be set 00:21:07.770 [2024-04-26 14:57:50.292172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.770 [2024-04-26 14:57:50.292592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.770 [2024-04-26 14:57:50.292606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbefd0 with addr=10.0.0.2, port=4420 00:21:07.770 [2024-04-26 14:57:50.292616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbefd0 is same with the state(5) to be set 00:21:07.770 [2024-04-26 14:57:50.293277] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.770 [2024-04-26 14:57:50.293326] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.770 [2024-04-26 14:57:50.293365] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.770 [2024-04-26 14:57:50.293452] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.770 [2024-04-26 14:57:50.293794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.770 [2024-04-26 14:57:50.294277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.770 [2024-04-26 14:57:50.294315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181790 with addr=10.0.0.2, port=4420 00:21:07.770 [2024-04-26 14:57:50.294327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2181790 
is same with the state(5) to be set 00:21:07.770 [2024-04-26 14:57:50.294342] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23000f0 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.294354] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbefd0 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.294421] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.770 [2024-04-26 14:57:50.294477] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.770 [2024-04-26 14:57:50.294564] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.770 [2024-04-26 14:57:50.294588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2181790 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.294599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:07.770 [2024-04-26 14:57:50.294606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:07.770 [2024-04-26 14:57:50.294615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:07.770 [2024-04-26 14:57:50.294629] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:07.770 [2024-04-26 14:57:50.294636] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:07.770 [2024-04-26 14:57:50.294643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:07.770 [2024-04-26 14:57:50.294707] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.770 [2024-04-26 14:57:50.294715] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.770 [2024-04-26 14:57:50.294722] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:07.770 [2024-04-26 14:57:50.294728] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:07.770 [2024-04-26 14:57:50.294735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:07.770 [2024-04-26 14:57:50.294775] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:07.770 [2024-04-26 14:57:50.297674] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21807c0 (9): Bad file descriptor 00:21:07.770 [2024-04-26 14:57:50.297811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.297824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.297847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.297856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.770 [2024-04-26 14:57:50.297866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.770 [2024-04-26 14:57:50.297873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.297884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.297892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.297901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.297909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.297919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.297926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.297936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.297943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.297953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.297961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.297970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.297978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.297987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.297995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.771 [2024-04-26 14:57:50.298539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.771 [2024-04-26 14:57:50.298549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.298908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.298917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221deb0 is same with the state(5) to be set 00:21:07.772 [2024-04-26 14:57:50.300201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300308] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.772 [2024-04-26 14:57:50.300418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.772 [2024-04-26 14:57:50.300427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.300970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.773 [2024-04-26 14:57:50.300986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.300995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.301002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.301012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.301019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.301030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.301037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.301046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.301053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.301062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.773 [2024-04-26 14:57:50.301070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.773 [2024-04-26 14:57:50.301079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 
14:57:50.301151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.301288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.301296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f6b90 is same with the state(5) to be set 00:21:07.774 [2024-04-26 14:57:50.302576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.302983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.302990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.303000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.303007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.303016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.303023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.303032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.774 [2024-04-26 14:57:50.303040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.774 [2024-04-26 14:57:50.303049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.303669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.775 [2024-04-26 14:57:50.303677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155670 is same with the state(5) to be set 00:21:07.775 [2024-04-26 14:57:50.304943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.775 [2024-04-26 14:57:50.304957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.304970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.304978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.304989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.304998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305072] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.776 [2024-04-26 14:57:50.305530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.776 [2024-04-26 14:57:50.305537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.777 [2024-04-26 14:57:50.305746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 
14:57:50.305917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.305990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.305999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.306007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.306016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.306023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.306031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2156b00 is same with the state(5) to be set 00:21:07.777 [2024-04-26 14:57:50.307302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.307317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.307329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.307338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.307349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.307358] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.307369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.307377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.307387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.307395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.307406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.307413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.307423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.307430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.307439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.307446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.307455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.777 [2024-04-26 14:57:50.307462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.777 [2024-04-26 14:57:50.307472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.307988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.307997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.308005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.308014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.308023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.308032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.308039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.308048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.308055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.308064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.308071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.308080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.308088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.308097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.308104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.778 [2024-04-26 14:57:50.308113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.778 [2024-04-26 14:57:50.308120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.308366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.308374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2157fb0 is same with the state(5) to be set 00:21:07.779 [2024-04-26 14:57:50.309644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.309992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.309999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.310008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.310016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.310025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.310032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.779 [2024-04-26 14:57:50.310041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.779 [2024-04-26 14:57:50.310048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.780 [2024-04-26 14:57:50.310461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 
14:57:50.310628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.780 [2024-04-26 14:57:50.310693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.780 [2024-04-26 14:57:50.310702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.310709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.310717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2216ce0 is same with the state(5) to be set 00:21:07.781 [2024-04-26 14:57:50.312313] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:07.781 [2024-04-26 14:57:50.312337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:07.781 [2024-04-26 14:57:50.312347] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:07.781 [2024-04-26 14:57:50.312360] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:07.781 [2024-04-26 14:57:50.312435] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:07.781 [2024-04-26 14:57:50.312450] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:07.781 [2024-04-26 14:57:50.312527] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:07.781 [2024-04-26 14:57:50.312538] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:07.781 [2024-04-26 14:57:50.313062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.781 [2024-04-26 14:57:50.313473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.781 [2024-04-26 14:57:50.313487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215e0c0 with addr=10.0.0.2, port=4420 00:21:07.781 [2024-04-26 14:57:50.313498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215e0c0 is same with the state(5) to be set 00:21:07.781 [2024-04-26 14:57:50.313586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.781 [2024-04-26 14:57:50.313644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.781 [2024-04-26 14:57:50.313654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2160e30 with addr=10.0.0.2, port=4420 00:21:07.781 [2024-04-26 14:57:50.313661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2160e30 is same with the state(5) to be set 00:21:07.781 [2024-04-26 14:57:50.313747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.781 [2024-04-26 14:57:50.314143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.781 [2024-04-26 14:57:50.314153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x216c8f0 with addr=10.0.0.2, port=4420 00:21:07.781 [2024-04-26 14:57:50.314161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216c8f0 is same with the state(5) to be set 00:21:07.781 [2024-04-26 14:57:50.314485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.781 [2024-04-26 14:57:50.314695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.781 [2024-04-26 14:57:50.314704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224d400 with addr=10.0.0.2, port=4420 00:21:07.781 [2024-04-26 14:57:50.314712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224d400 is same with the state(5) to be set 00:21:07.781 [2024-04-26 14:57:50.316060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.781 [2024-04-26 14:57:50.316463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.781 [2024-04-26 14:57:50.316480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.781 [2024-04-26 14:57:50.316487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 
14:57:50.316627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316792] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.316988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.316998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.317005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.317015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.317022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.317031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.317038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.317047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.317055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.317064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.317071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.317080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.317087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.317096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.317103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.317113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.317120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.317129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.782 [2024-04-26 14:57:50.317136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.782 [2024-04-26 14:57:50.317145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215850 is same with the state(5) to be set 00:21:07.782 [2024-04-26 14:57:50.319232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:07.782 [2024-04-26 14:57:50.319255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:07.782 [2024-04-26 14:57:50.319266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:07.782 task offset: 30208 on job bdev=Nvme8n1 fails 00:21:07.782 00:21:07.782 Latency(us) 00:21:07.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.782 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.782 Job: Nvme1n1 ended in about 0.95 seconds with error 00:21:07.782 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme1n1 : 0.95 135.06 8.44 67.53 0.00 312512.28 15073.28 253405.87 00:21:07.783 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.783 Job: Nvme2n1 ended in about 0.94 seconds with error 00:21:07.783 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme2n1 : 0.94 210.29 13.14 68.32 0.00 222501.73 7755.09 251658.24 00:21:07.783 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.783 Job: Nvme3n1 ended in about 0.95 seconds with error 00:21:07.783 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme3n1 : 0.95 202.08 12.63 67.36 0.00 225407.15 33860.27 228939.09 00:21:07.783 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.783 Job: Nvme4n1 ended in about 0.94 seconds with error 00:21:07.783 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme4n1 : 0.94 204.69 12.79 68.23 0.00 217687.25 17039.36 244667.73 00:21:07.783 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.783 Job: Nvme5n1 ended in about 0.95 seconds with error 00:21:07.783 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme5n1 : 0.95 201.58 12.60 67.19 0.00 216509.87 16384.00 227191.47 00:21:07.783 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.783 Job: Nvme6n1 ended in about 0.95 seconds with error 00:21:07.783 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme6n1 : 0.95 134.05 8.38 67.03 0.00 283312.07 18896.21 276125.01 00:21:07.783 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.783 Job: Nvme7n1 ended in about 0.96 seconds with error 00:21:07.783 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme7n1 : 0.96 200.59 12.54 66.86 0.00 208284.59 13981.01 249910.61 00:21:07.783 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.783 Job: Nvme8n1 ended in about 0.94 seconds with error 00:21:07.783 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme8n1 : 0.94 205.30 12.83 68.43 0.00 198019.41 19114.67 246415.36 00:21:07.783 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.783 Job: Nvme9n1 ended in about 0.97 
seconds with error 00:21:07.783 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme9n1 : 0.97 132.51 8.28 66.26 0.00 268116.20 22282.24 248162.99 00:21:07.783 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:07.783 Job: Nvme10n1 ended in about 0.96 seconds with error 00:21:07.783 Verification LBA range: start 0x0 length 0x400 00:21:07.783 Nvme10n1 : 0.96 137.57 8.60 66.70 0.00 254435.16 19770.03 267386.88 00:21:07.783 =================================================================================================================== 00:21:07.783 Total : 1763.72 110.23 673.91 0.00 236355.96 7755.09 276125.01 00:21:07.783 [2024-04-26 14:57:50.348055] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:07.783 [2024-04-26 14:57:50.348087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:07.783 [2024-04-26 14:57:50.348369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.348771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.348781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a6c10 with addr=10.0.0.2, port=4420 00:21:07.783 [2024-04-26 14:57:50.348790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6c10 is same with the state(5) to be set 00:21:07.783 [2024-04-26 14:57:50.349140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.349341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.349350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2242a30 with addr=10.0.0.2, port=4420 00:21:07.783 [2024-04-26 14:57:50.349363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242a30 is same with the state(5) to be set 00:21:07.783 [2024-04-26 14:57:50.349376] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215e0c0 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.349387] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2160e30 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.349397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216c8f0 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.349406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224d400 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.349949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.350292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.350301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbefd0 with addr=10.0.0.2, port=4420 00:21:07.783 [2024-04-26 14:57:50.350309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbefd0 is same with the state(5) to be set 00:21:07.783 [2024-04-26 14:57:50.350671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.351018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.351027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x23000f0 with addr=10.0.0.2, port=4420 00:21:07.783 [2024-04-26 14:57:50.351034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23000f0 is same with the state(5) to be set 00:21:07.783 [2024-04-26 14:57:50.351371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.351703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.351712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2181790 with addr=10.0.0.2, port=4420 00:21:07.783 [2024-04-26 14:57:50.351719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2181790 is same with the state(5) to be set 00:21:07.783 [2024-04-26 14:57:50.351928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.352188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:07.783 [2024-04-26 14:57:50.352201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21807c0 with addr=10.0.0.2, port=4420 00:21:07.783 [2024-04-26 14:57:50.352208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21807c0 is same with the state(5) to be set 00:21:07.783 [2024-04-26 14:57:50.352218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a6c10 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.352227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242a30 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.352236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:07.783 [2024-04-26 14:57:50.352242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:07.783 [2024-04-26 14:57:50.352251] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:07.783 [2024-04-26 14:57:50.352263] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:07.783 [2024-04-26 14:57:50.352269] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:07.783 [2024-04-26 14:57:50.352276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:07.783 [2024-04-26 14:57:50.352287] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:07.783 [2024-04-26 14:57:50.352293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:07.783 [2024-04-26 14:57:50.352303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:07.783 [2024-04-26 14:57:50.352313] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:07.783 [2024-04-26 14:57:50.352319] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:07.783 [2024-04-26 14:57:50.352326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:21:07.783 [2024-04-26 14:57:50.352346] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:07.783 [2024-04-26 14:57:50.352358] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:07.783 [2024-04-26 14:57:50.352367] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:07.783 [2024-04-26 14:57:50.352378] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:07.783 [2024-04-26 14:57:50.352388] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:07.783 [2024-04-26 14:57:50.352398] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:07.783 [2024-04-26 14:57:50.352741] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.783 [2024-04-26 14:57:50.352751] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.783 [2024-04-26 14:57:50.352757] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.783 [2024-04-26 14:57:50.352764] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.783 [2024-04-26 14:57:50.352771] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbefd0 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.352781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23000f0 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.352790] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2181790 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.352799] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21807c0 (9): Bad file descriptor 00:21:07.783 [2024-04-26 14:57:50.352807] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:07.783 [2024-04-26 14:57:50.352813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:07.783 [2024-04-26 14:57:50.352820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:07.783 [2024-04-26 14:57:50.352830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:07.783 [2024-04-26 14:57:50.352836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:07.783 [2024-04-26 14:57:50.352848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:07.783 [2024-04-26 14:57:50.352887] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.783 [2024-04-26 14:57:50.352894] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:07.783 [2024-04-26 14:57:50.352900] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:07.783 [2024-04-26 14:57:50.352906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:07.783 [2024-04-26 14:57:50.352913] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:07.783 [2024-04-26 14:57:50.352922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:07.783 [2024-04-26 14:57:50.352931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:07.783 [2024-04-26 14:57:50.352937] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:07.783 [2024-04-26 14:57:50.352946] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:07.784 [2024-04-26 14:57:50.352953] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:07.784 [2024-04-26 14:57:50.352959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:07.784 [2024-04-26 14:57:50.352969] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:07.784 [2024-04-26 14:57:50.352975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:07.784 [2024-04-26 14:57:50.352982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:07.784 [2024-04-26 14:57:50.353015] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.784 [2024-04-26 14:57:50.353023] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.784 [2024-04-26 14:57:50.353029] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:07.784 [2024-04-26 14:57:50.353035] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:08.043 14:57:50 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:08.043 14:57:50 -- target/shutdown.sh@139 -- # sleep 1 00:21:08.984 14:57:51 -- target/shutdown.sh@142 -- # kill -9 1126845 00:21:08.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1126845) - No such process 00:21:08.984 14:57:51 -- target/shutdown.sh@142 -- # true 00:21:08.984 14:57:51 -- target/shutdown.sh@144 -- # stoptarget 00:21:08.984 14:57:51 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:08.984 14:57:51 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:08.984 14:57:51 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:08.984 14:57:51 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:08.984 14:57:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:08.984 14:57:51 -- nvmf/common.sh@117 -- # sync 00:21:08.984 14:57:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:08.984 14:57:51 -- nvmf/common.sh@120 -- # set +e 00:21:08.984 14:57:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.984 14:57:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:08.984 rmmod nvme_tcp 00:21:08.984 rmmod nvme_fabrics 00:21:08.984 rmmod nvme_keyring 00:21:08.984 14:57:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.984 14:57:51 -- nvmf/common.sh@124 -- # set -e 00:21:08.984 14:57:51 -- nvmf/common.sh@125 -- # return 0 00:21:08.984 14:57:51 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:21:08.984 14:57:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:08.984 14:57:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:08.984 14:57:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:08.984 14:57:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.984 14:57:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.984 14:57:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.984 14:57:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.984 14:57:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.529 14:57:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:11.529 00:21:11.529 real 0m7.770s 00:21:11.529 user 0m18.915s 00:21:11.529 sys 0m1.191s 00:21:11.529 14:57:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:11.529 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:21:11.529 ************************************ 00:21:11.529 END TEST nvmf_shutdown_tc3 00:21:11.529 ************************************ 00:21:11.529 14:57:53 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:11.529 00:21:11.529 real 0m32.368s 00:21:11.529 user 1m15.617s 00:21:11.529 sys 0m9.084s 00:21:11.529 14:57:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:11.529 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:21:11.529 ************************************ 00:21:11.529 END TEST nvmf_shutdown 00:21:11.529 ************************************ 00:21:11.529 14:57:53 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:21:11.529 14:57:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:11.529 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:21:11.529 14:57:53 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:21:11.529 14:57:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:11.529 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:21:11.529 
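nvmftestfini above then unloads the kernel initiator modules and tears down the namespaced test interface. A condensed sketch of that cleanup, using the interface and namespace names from this job (the ip netns delete line is an assumption about what _remove_spdk_ns amounts to here):

modprobe -v -r nvme-tcp                               # also drops nvme_fabrics / nvme_keyring, as logged above
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                              # remove the initiator-side address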
14:57:53 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:21:11.529 14:57:53 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:11.529 14:57:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:11.529 14:57:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:11.529 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:21:11.529 ************************************ 00:21:11.529 START TEST nvmf_multicontroller 00:21:11.529 ************************************ 00:21:11.529 14:57:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:11.529 * Looking for test storage... 00:21:11.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:11.529 14:57:54 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.529 14:57:54 -- nvmf/common.sh@7 -- # uname -s 00:21:11.529 14:57:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.529 14:57:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.529 14:57:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.529 14:57:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.529 14:57:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.529 14:57:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.529 14:57:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.529 14:57:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.529 14:57:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.529 14:57:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.529 14:57:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.529 14:57:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.529 14:57:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.529 14:57:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.529 14:57:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.529 14:57:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.529 14:57:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.529 14:57:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.529 14:57:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.529 14:57:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.529 14:57:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.529 14:57:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.529 14:57:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.529 14:57:54 -- paths/export.sh@5 -- # export PATH 00:21:11.529 14:57:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.529 14:57:54 -- nvmf/common.sh@47 -- # : 0 00:21:11.529 14:57:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:11.529 14:57:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:11.529 14:57:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.529 14:57:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.529 14:57:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.529 14:57:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:11.529 14:57:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:11.529 14:57:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:11.529 14:57:54 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:11.529 14:57:54 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:11.529 14:57:54 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:11.529 14:57:54 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:11.529 14:57:54 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:11.529 14:57:54 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:11.529 14:57:54 -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:11.529 14:57:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:11.529 14:57:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.529 14:57:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:11.529 14:57:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:11.529 14:57:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:11.529 14:57:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.529 14:57:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.529 14:57:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:21:11.529 14:57:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:11.529 14:57:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:11.529 14:57:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:11.529 14:57:54 -- common/autotest_common.sh@10 -- # set +x 00:21:19.665 14:58:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:19.665 14:58:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:19.665 14:58:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:19.665 14:58:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:19.665 14:58:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:19.665 14:58:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:19.665 14:58:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:19.665 14:58:01 -- nvmf/common.sh@295 -- # net_devs=() 00:21:19.665 14:58:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:19.665 14:58:01 -- nvmf/common.sh@296 -- # e810=() 00:21:19.665 14:58:01 -- nvmf/common.sh@296 -- # local -ga e810 00:21:19.665 14:58:01 -- nvmf/common.sh@297 -- # x722=() 00:21:19.665 14:58:01 -- nvmf/common.sh@297 -- # local -ga x722 00:21:19.665 14:58:01 -- nvmf/common.sh@298 -- # mlx=() 00:21:19.665 14:58:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:19.665 14:58:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.665 14:58:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:19.665 14:58:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:19.665 14:58:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:19.665 14:58:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.665 14:58:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:19.665 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:19.665 14:58:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.665 14:58:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:19.665 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:19.665 14:58:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
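gather_supported_nvmf_pci_devs above is essentially a sysfs walk over the whitelisted Intel/Mellanox device IDs. A stripped-down sketch for the two E810 (0x159b) ports this node reports (PCI addresses taken from this run; the sysfs layout is standard):

for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done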
00:21:19.665 14:58:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:19.665 14:58:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.665 14:58:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.665 14:58:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:19.665 14:58:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.665 14:58:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:19.665 Found net devices under 0000:31:00.0: cvl_0_0 00:21:19.665 14:58:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.665 14:58:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.665 14:58:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.665 14:58:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:19.665 14:58:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.665 14:58:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:19.665 Found net devices under 0000:31:00.1: cvl_0_1 00:21:19.665 14:58:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.665 14:58:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:19.665 14:58:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:19.665 14:58:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:19.665 14:58:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.665 14:58:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.665 14:58:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.665 14:58:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:19.665 14:58:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.665 14:58:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.665 14:58:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:19.665 14:58:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.665 14:58:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.665 14:58:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:19.665 14:58:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:19.665 14:58:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.665 14:58:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.665 14:58:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.665 14:58:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.665 14:58:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:19.665 14:58:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.665 14:58:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.665 14:58:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:21:19.665 14:58:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:19.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:21:19.665 00:21:19.665 --- 10.0.0.2 ping statistics --- 00:21:19.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.665 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:21:19.665 14:58:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:21:19.665 00:21:19.665 --- 10.0.0.1 ping statistics --- 00:21:19.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.665 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:21:19.665 14:58:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.665 14:58:01 -- nvmf/common.sh@411 -- # return 0 00:21:19.665 14:58:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:19.665 14:58:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.665 14:58:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:19.665 14:58:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.665 14:58:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:19.665 14:58:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:19.665 14:58:01 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:19.665 14:58:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:19.665 14:58:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:19.665 14:58:01 -- common/autotest_common.sh@10 -- # set +x 00:21:19.665 14:58:01 -- nvmf/common.sh@470 -- # nvmfpid=1131964 00:21:19.665 14:58:01 -- nvmf/common.sh@471 -- # waitforlisten 1131964 00:21:19.665 14:58:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:19.665 14:58:01 -- common/autotest_common.sh@817 -- # '[' -z 1131964 ']' 00:21:19.665 14:58:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.665 14:58:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:19.665 14:58:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.665 14:58:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:19.666 14:58:01 -- common/autotest_common.sh@10 -- # set +x 00:21:19.666 [2024-04-26 14:58:01.436085] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:19.666 [2024-04-26 14:58:01.436148] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.666 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.666 [2024-04-26 14:58:01.523288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:19.666 [2024-04-26 14:58:01.614246] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:19.666 [2024-04-26 14:58:01.614308] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.666 [2024-04-26 14:58:01.614319] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.666 [2024-04-26 14:58:01.614327] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.666 [2024-04-26 14:58:01.614334] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.666 [2024-04-26 14:58:01.614475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.666 [2024-04-26 14:58:01.614627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.666 [2024-04-26 14:58:01.614627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.666 14:58:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:19.666 14:58:02 -- common/autotest_common.sh@850 -- # return 0 00:21:19.666 14:58:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:19.666 14:58:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:19.666 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.666 14:58:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.666 14:58:02 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:19.666 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.666 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.666 [2024-04-26 14:58:02.267748] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.666 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.666 14:58:02 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:19.666 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.666 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.666 Malloc0 00:21:19.666 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.666 14:58:02 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:19.666 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.666 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.666 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.666 14:58:02 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:19.666 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.666 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.926 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.926 14:58:02 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.926 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.926 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.926 [2024-04-26 14:58:02.346730] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.926 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.926 14:58:02 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:19.926 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.926 14:58:02 
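nvmf_tcp_init and nvmfappstart above build a two-port loopback topology by moving one E810 interface into a private network namespace and starting the target inside it. A condensed sketch of the same sequence, run as root, with the names, addresses and build path from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
ping -c 1 10.0.0.2                                                  # sanity-check the path
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# the test then waits for /var/tmp/spdk.sock before issuing any RPCs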
-- common/autotest_common.sh@10 -- # set +x 00:21:19.926 [2024-04-26 14:58:02.358680] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:19.926 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.926 14:58:02 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:19.926 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.926 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.926 Malloc1 00:21:19.926 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.926 14:58:02 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:19.926 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.926 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.926 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.926 14:58:02 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:19.926 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.926 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.926 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.926 14:58:02 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:19.926 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.926 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.926 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.926 14:58:02 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:19.926 14:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.926 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.926 14:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.926 14:58:02 -- host/multicontroller.sh@44 -- # bdevperf_pid=1132076 00:21:19.926 14:58:02 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:19.926 14:58:02 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:19.926 14:58:02 -- host/multicontroller.sh@47 -- # waitforlisten 1132076 /var/tmp/bdevperf.sock 00:21:19.926 14:58:02 -- common/autotest_common.sh@817 -- # '[' -z 1132076 ']' 00:21:19.926 14:58:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.926 14:58:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:19.926 14:58:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
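The multicontroller setup above then creates two subsystems that both listen on ports 4420 and 4421, backs each with a Malloc bdev, and launches bdevperf with its own RPC socket. A sketch of the equivalent rpc.py calls (rpc.py stands in for the test's rpc_cmd helper; all paths and names are the ones used in this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
# bdevperf idles (-z) until perform_tests is sent over /var/tmp/bdevperf.sock:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &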
00:21:19.926 14:58:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:19.926 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:21:20.865 14:58:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:20.865 14:58:03 -- common/autotest_common.sh@850 -- # return 0 00:21:20.865 14:58:03 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:20.865 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.865 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:20.865 NVMe0n1 00:21:20.865 14:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.865 14:58:03 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:20.865 14:58:03 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:20.865 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.865 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:20.865 14:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.865 1 00:21:20.865 14:58:03 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:20.865 14:58:03 -- common/autotest_common.sh@638 -- # local es=0 00:21:20.865 14:58:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:20.865 14:58:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:20.865 14:58:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.865 14:58:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:20.865 14:58:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.865 14:58:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:20.865 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.865 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:20.865 request: 00:21:20.865 { 00:21:20.865 "name": "NVMe0", 00:21:20.865 "trtype": "tcp", 00:21:20.865 "traddr": "10.0.0.2", 00:21:20.865 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:20.865 "hostaddr": "10.0.0.2", 00:21:20.865 "hostsvcid": "60000", 00:21:20.865 "adrfam": "ipv4", 00:21:20.865 "trsvcid": "4420", 00:21:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.865 "method": "bdev_nvme_attach_controller", 00:21:20.865 "req_id": 1 00:21:20.865 } 00:21:20.865 Got JSON-RPC error response 00:21:20.865 response: 00:21:20.865 { 00:21:20.865 "code": -114, 00:21:20.865 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:20.865 } 00:21:20.865 14:58:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:20.865 14:58:03 -- common/autotest_common.sh@641 -- # es=1 00:21:20.865 14:58:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:20.865 14:58:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:20.865 14:58:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:20.865 14:58:03 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:20.865 14:58:03 -- common/autotest_common.sh@638 -- # local es=0 00:21:20.865 14:58:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:20.865 14:58:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:20.865 14:58:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.865 14:58:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:20.865 14:58:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.865 14:58:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:20.865 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.865 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:20.865 request: 00:21:20.865 { 00:21:20.865 "name": "NVMe0", 00:21:20.865 "trtype": "tcp", 00:21:20.865 "traddr": "10.0.0.2", 00:21:20.865 "hostaddr": "10.0.0.2", 00:21:20.865 "hostsvcid": "60000", 00:21:20.865 "adrfam": "ipv4", 00:21:20.865 "trsvcid": "4420", 00:21:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:20.865 "method": "bdev_nvme_attach_controller", 00:21:20.865 "req_id": 1 00:21:20.865 } 00:21:20.865 Got JSON-RPC error response 00:21:20.865 response: 00:21:20.865 { 00:21:20.865 "code": -114, 00:21:20.865 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:20.865 } 00:21:20.865 14:58:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:20.865 14:58:03 -- common/autotest_common.sh@641 -- # es=1 00:21:20.865 14:58:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:20.865 14:58:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:20.865 14:58:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:20.865 14:58:03 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:20.865 14:58:03 -- common/autotest_common.sh@638 -- # local es=0 00:21:20.865 14:58:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:20.865 14:58:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:20.865 14:58:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.865 14:58:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:20.865 14:58:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.865 14:58:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:20.865 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.865 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:20.865 request: 00:21:20.865 { 00:21:20.865 "name": "NVMe0", 00:21:20.865 "trtype": "tcp", 00:21:20.865 "traddr": "10.0.0.2", 00:21:20.865 "hostaddr": 
"10.0.0.2", 00:21:20.865 "hostsvcid": "60000", 00:21:20.865 "adrfam": "ipv4", 00:21:20.865 "trsvcid": "4420", 00:21:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.865 "multipath": "disable", 00:21:20.865 "method": "bdev_nvme_attach_controller", 00:21:20.865 "req_id": 1 00:21:20.865 } 00:21:20.865 Got JSON-RPC error response 00:21:20.865 response: 00:21:20.865 { 00:21:20.865 "code": -114, 00:21:20.865 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:20.865 } 00:21:20.865 14:58:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:20.865 14:58:03 -- common/autotest_common.sh@641 -- # es=1 00:21:20.865 14:58:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:20.865 14:58:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:20.865 14:58:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:20.865 14:58:03 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:20.865 14:58:03 -- common/autotest_common.sh@638 -- # local es=0 00:21:20.865 14:58:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:20.865 14:58:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:20.866 14:58:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.866 14:58:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:20.866 14:58:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.866 14:58:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:20.866 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.866 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:20.866 request: 00:21:20.866 { 00:21:20.866 "name": "NVMe0", 00:21:20.866 "trtype": "tcp", 00:21:20.866 "traddr": "10.0.0.2", 00:21:20.866 "hostaddr": "10.0.0.2", 00:21:20.866 "hostsvcid": "60000", 00:21:20.866 "adrfam": "ipv4", 00:21:20.866 "trsvcid": "4420", 00:21:20.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.866 "multipath": "failover", 00:21:20.866 "method": "bdev_nvme_attach_controller", 00:21:20.866 "req_id": 1 00:21:20.866 } 00:21:20.866 Got JSON-RPC error response 00:21:20.866 response: 00:21:20.866 { 00:21:20.866 "code": -114, 00:21:20.866 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:20.866 } 00:21:20.866 14:58:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:20.866 14:58:03 -- common/autotest_common.sh@641 -- # es=1 00:21:20.866 14:58:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:20.866 14:58:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:20.866 14:58:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:20.866 14:58:03 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.866 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.866 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:21.125 00:21:21.125 14:58:03 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:21:21.125 14:58:03 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:21.125 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.125 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:21.125 14:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.125 14:58:03 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:21.125 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.125 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:21.125 00:21:21.125 14:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.125 14:58:03 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:21.125 14:58:03 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:21.125 14:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.125 14:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:21.125 14:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.125 14:58:03 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:21.125 14:58:03 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:22.508 0 00:21:22.508 14:58:04 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:22.508 14:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.508 14:58:04 -- common/autotest_common.sh@10 -- # set +x 00:21:22.508 14:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.508 14:58:04 -- host/multicontroller.sh@100 -- # killprocess 1132076 00:21:22.508 14:58:04 -- common/autotest_common.sh@936 -- # '[' -z 1132076 ']' 00:21:22.508 14:58:04 -- common/autotest_common.sh@940 -- # kill -0 1132076 00:21:22.508 14:58:04 -- common/autotest_common.sh@941 -- # uname 00:21:22.508 14:58:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.508 14:58:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1132076 00:21:22.508 14:58:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:22.508 14:58:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:22.508 14:58:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1132076' 00:21:22.508 killing process with pid 1132076 00:21:22.508 14:58:04 -- common/autotest_common.sh@955 -- # kill 1132076 00:21:22.508 14:58:04 -- common/autotest_common.sh@960 -- # wait 1132076 00:21:22.508 14:58:04 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.508 14:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.508 14:58:04 -- common/autotest_common.sh@10 -- # set +x 00:21:22.508 14:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.508 14:58:05 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:22.508 14:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.508 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:21:22.508 14:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.508 14:58:05 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
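The multicontroller body above reduces to: attach NVMe0 to cnode1 on port 4420, prove that re-using the same -b name is rejected with -114 (for a different subsystem, for multipath "disable", and for "failover" on the same path), attach the 4421 path, swap in a second controller name, and drive I/O. A sketch against the bdevperf RPC socket, using the commands as they appear in this run:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Re-using the NVMe0 name for cnode2, or with -x disable / -x failover, returns -114 as shown above.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1                          # second path under the same controller name
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000     # triggers the duplicate bdev-name error captured in try.txt
$RPC bdev_nvme_get_controllers                             # the test expects 2 controllers here
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests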
00:21:22.508 14:58:05 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:22.508 14:58:05 -- common/autotest_common.sh@1598 -- # read -r file 00:21:22.508 14:58:05 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:22.508 14:58:05 -- common/autotest_common.sh@1597 -- # sort -u 00:21:22.508 14:58:05 -- common/autotest_common.sh@1599 -- # cat 00:21:22.508 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:22.508 [2024-04-26 14:58:02.478789] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:22.508 [2024-04-26 14:58:02.478851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132076 ] 00:21:22.508 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.508 [2024-04-26 14:58:02.537703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.508 [2024-04-26 14:58:02.600583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.508 [2024-04-26 14:58:03.656765] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name aae24b05-d239-4f38-a79f-962d8051e844 already exists 00:21:22.508 [2024-04-26 14:58:03.656796] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:aae24b05-d239-4f38-a79f-962d8051e844 alias for bdev NVMe1n1 00:21:22.508 [2024-04-26 14:58:03.656806] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:22.508 Running I/O for 1 seconds... 00:21:22.508 00:21:22.508 Latency(us) 00:21:22.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.508 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:22.508 NVMe0n1 : 1.00 21570.23 84.26 0.00 0.00 5921.62 3986.77 16930.13 00:21:22.508 =================================================================================================================== 00:21:22.508 Total : 21570.23 84.26 0.00 0.00 5921.62 3986.77 16930.13 00:21:22.508 Received shutdown signal, test time was about 1.000000 seconds 00:21:22.508 00:21:22.508 Latency(us) 00:21:22.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.508 =================================================================================================================== 00:21:22.508 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.508 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:22.508 14:58:05 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:22.508 14:58:05 -- common/autotest_common.sh@1598 -- # read -r file 00:21:22.508 14:58:05 -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:22.508 14:58:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:22.508 14:58:05 -- nvmf/common.sh@117 -- # sync 00:21:22.508 14:58:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:22.508 14:58:05 -- nvmf/common.sh@120 -- # set +e 00:21:22.508 14:58:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.508 14:58:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.508 rmmod nvme_tcp 00:21:22.508 rmmod nvme_fabrics 00:21:22.508 rmmod nvme_keyring 00:21:22.508 14:58:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.508 14:58:05 -- nvmf/common.sh@124 -- # set -e 
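killprocess, as invoked above for the bdevperf and nvmf_tgt PIDs, is a small guard around kill: confirm the PID is alive, check what it is, then kill and reap it. A minimal sketch consistent with the calls logged above (the real helper has an extra branch when the process turns out to be a sudo wrapper; that branch is omitted here):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                       # still running?
    ps --no-headers -o comm= "$pid"                  # logged as reactor_0 / reactor_1 above
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}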
00:21:22.508 14:58:05 -- nvmf/common.sh@125 -- # return 0 00:21:22.508 14:58:05 -- nvmf/common.sh@478 -- # '[' -n 1131964 ']' 00:21:22.508 14:58:05 -- nvmf/common.sh@479 -- # killprocess 1131964 00:21:22.508 14:58:05 -- common/autotest_common.sh@936 -- # '[' -z 1131964 ']' 00:21:22.508 14:58:05 -- common/autotest_common.sh@940 -- # kill -0 1131964 00:21:22.508 14:58:05 -- common/autotest_common.sh@941 -- # uname 00:21:22.508 14:58:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.508 14:58:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1131964 00:21:22.508 14:58:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:22.508 14:58:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:22.508 14:58:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1131964' 00:21:22.508 killing process with pid 1131964 00:21:22.508 14:58:05 -- common/autotest_common.sh@955 -- # kill 1131964 00:21:22.508 14:58:05 -- common/autotest_common.sh@960 -- # wait 1131964 00:21:22.768 14:58:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:22.768 14:58:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:22.768 14:58:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:22.768 14:58:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.768 14:58:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.768 14:58:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.768 14:58:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.768 14:58:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.312 14:58:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:25.312 00:21:25.312 real 0m13.378s 00:21:25.312 user 0m15.942s 00:21:25.312 sys 0m6.085s 00:21:25.312 14:58:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.312 14:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:25.312 ************************************ 00:21:25.312 END TEST nvmf_multicontroller 00:21:25.312 ************************************ 00:21:25.312 14:58:07 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:25.312 14:58:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:25.312 14:58:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.312 14:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:25.312 ************************************ 00:21:25.312 START TEST nvmf_aer 00:21:25.312 ************************************ 00:21:25.312 14:58:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:25.312 * Looking for test storage... 
00:21:25.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:25.312 14:58:07 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.312 14:58:07 -- nvmf/common.sh@7 -- # uname -s 00:21:25.312 14:58:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.312 14:58:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.312 14:58:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.312 14:58:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.312 14:58:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.312 14:58:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.312 14:58:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.312 14:58:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.312 14:58:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.312 14:58:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.312 14:58:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:25.312 14:58:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:25.312 14:58:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.312 14:58:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.312 14:58:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.312 14:58:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.312 14:58:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.312 14:58:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.312 14:58:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.312 14:58:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.312 14:58:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.313 14:58:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.313 14:58:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.313 14:58:07 -- paths/export.sh@5 -- # export PATH 00:21:25.313 14:58:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.313 14:58:07 -- nvmf/common.sh@47 -- # : 0 00:21:25.313 14:58:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.313 14:58:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.313 14:58:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.313 14:58:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.313 14:58:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.313 14:58:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.313 14:58:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.313 14:58:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.313 14:58:07 -- host/aer.sh@11 -- # nvmftestinit 00:21:25.313 14:58:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.313 14:58:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.313 14:58:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.313 14:58:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.313 14:58:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.313 14:58:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.313 14:58:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.313 14:58:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.313 14:58:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:25.313 14:58:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:25.313 14:58:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.313 14:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:31.911 14:58:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:31.911 14:58:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.911 14:58:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.911 14:58:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.911 14:58:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.911 14:58:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.911 14:58:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.911 14:58:14 -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.911 14:58:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.911 14:58:14 -- nvmf/common.sh@296 -- # e810=() 00:21:31.911 14:58:14 -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.911 14:58:14 -- nvmf/common.sh@297 -- # x722=() 00:21:31.911 
14:58:14 -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.911 14:58:14 -- nvmf/common.sh@298 -- # mlx=() 00:21:31.911 14:58:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.911 14:58:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.911 14:58:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.911 14:58:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.911 14:58:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.911 14:58:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.911 14:58:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:31.911 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:31.911 14:58:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.911 14:58:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:31.911 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:31.911 14:58:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.911 14:58:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.911 14:58:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.911 14:58:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:31.911 14:58:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.911 14:58:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:31.911 Found net devices under 0000:31:00.0: cvl_0_0 00:21:31.911 14:58:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.911 14:58:14 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.911 14:58:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.911 14:58:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:31.911 14:58:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.911 14:58:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:31.911 Found net devices under 0000:31:00.1: cvl_0_1 00:21:31.911 14:58:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.911 14:58:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:31.911 14:58:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:31.911 14:58:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:31.911 14:58:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:31.911 14:58:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.911 14:58:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.911 14:58:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.911 14:58:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.911 14:58:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.911 14:58:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.911 14:58:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.911 14:58:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.911 14:58:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.911 14:58:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.911 14:58:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.911 14:58:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.911 14:58:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.172 14:58:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.173 14:58:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.173 14:58:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:32.173 14:58:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.173 14:58:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.173 14:58:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.173 14:58:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:32.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:21:32.173 00:21:32.173 --- 10.0.0.2 ping statistics --- 00:21:32.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.173 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:21:32.173 14:58:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:21:32.434 00:21:32.434 --- 10.0.0.1 ping statistics --- 00:21:32.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.434 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:21:32.434 14:58:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.434 14:58:14 -- nvmf/common.sh@411 -- # return 0 00:21:32.434 14:58:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:32.434 14:58:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.434 14:58:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:32.434 14:58:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:32.434 14:58:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.434 14:58:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:32.434 14:58:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:32.434 14:58:14 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:32.434 14:58:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:32.434 14:58:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:32.434 14:58:14 -- common/autotest_common.sh@10 -- # set +x 00:21:32.434 14:58:14 -- nvmf/common.sh@470 -- # nvmfpid=1136827 00:21:32.434 14:58:14 -- nvmf/common.sh@471 -- # waitforlisten 1136827 00:21:32.434 14:58:14 -- common/autotest_common.sh@817 -- # '[' -z 1136827 ']' 00:21:32.434 14:58:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.434 14:58:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:32.434 14:58:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.434 14:58:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:32.434 14:58:14 -- common/autotest_common.sh@10 -- # set +x 00:21:32.434 14:58:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:32.434 [2024-04-26 14:58:14.937294] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:32.434 [2024-04-26 14:58:14.937368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.434 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.434 [2024-04-26 14:58:15.009380] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.434 [2024-04-26 14:58:15.083050] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.434 [2024-04-26 14:58:15.083090] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.434 [2024-04-26 14:58:15.083099] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.434 [2024-04-26 14:58:15.083105] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.434 [2024-04-26 14:58:15.083111] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
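Condensed for reference, the nvmf_tcp_init sequence traced above amounts to roughly the following shell steps; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace, and the 10.0.0.x addresses are specific to this run, and this is only a sketch of what the nvmf/common.sh helpers do:

    # target-side port goes into its own namespace, initiator side stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in, then verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1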
00:21:32.434 [2024-04-26 14:58:15.083255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.434 [2024-04-26 14:58:15.083381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.434 [2024-04-26 14:58:15.083538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.434 [2024-04-26 14:58:15.083539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.378 14:58:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:33.378 14:58:15 -- common/autotest_common.sh@850 -- # return 0 00:21:33.378 14:58:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:33.378 14:58:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:33.378 14:58:15 -- common/autotest_common.sh@10 -- # set +x 00:21:33.378 14:58:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.378 14:58:15 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:33.378 14:58:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.378 14:58:15 -- common/autotest_common.sh@10 -- # set +x 00:21:33.378 [2024-04-26 14:58:15.743352] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.378 14:58:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.378 14:58:15 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:33.378 14:58:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.378 14:58:15 -- common/autotest_common.sh@10 -- # set +x 00:21:33.378 Malloc0 00:21:33.378 14:58:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.378 14:58:15 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:33.378 14:58:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.378 14:58:15 -- common/autotest_common.sh@10 -- # set +x 00:21:33.378 14:58:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.378 14:58:15 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:33.378 14:58:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.378 14:58:15 -- common/autotest_common.sh@10 -- # set +x 00:21:33.378 14:58:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.378 14:58:15 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.378 14:58:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.378 14:58:15 -- common/autotest_common.sh@10 -- # set +x 00:21:33.378 [2024-04-26 14:58:15.802822] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.378 14:58:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.378 14:58:15 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:33.378 14:58:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.378 14:58:15 -- common/autotest_common.sh@10 -- # set +x 00:21:33.378 [2024-04-26 14:58:15.810629] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:33.378 [ 00:21:33.378 { 00:21:33.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:33.378 "subtype": "Discovery", 00:21:33.378 "listen_addresses": [], 00:21:33.378 "allow_any_host": true, 00:21:33.378 "hosts": [] 00:21:33.378 }, 00:21:33.378 { 00:21:33.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:33.378 "subtype": "NVMe", 00:21:33.378 "listen_addresses": [ 00:21:33.378 { 00:21:33.378 "transport": "TCP", 00:21:33.378 "trtype": "TCP", 00:21:33.378 "adrfam": "IPv4", 00:21:33.378 "traddr": "10.0.0.2", 00:21:33.378 "trsvcid": "4420" 00:21:33.378 } 00:21:33.378 ], 00:21:33.378 "allow_any_host": true, 00:21:33.378 "hosts": [], 00:21:33.378 "serial_number": "SPDK00000000000001", 00:21:33.378 "model_number": "SPDK bdev Controller", 00:21:33.378 "max_namespaces": 2, 00:21:33.378 "min_cntlid": 1, 00:21:33.378 "max_cntlid": 65519, 00:21:33.378 "namespaces": [ 00:21:33.378 { 00:21:33.378 "nsid": 1, 00:21:33.379 "bdev_name": "Malloc0", 00:21:33.379 "name": "Malloc0", 00:21:33.379 "nguid": "900DEF8049AC4BF690CE68E888F9AB31", 00:21:33.379 "uuid": "900def80-49ac-4bf6-90ce-68e888f9ab31" 00:21:33.379 } 00:21:33.379 ] 00:21:33.379 } 00:21:33.379 ] 00:21:33.379 14:58:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.379 14:58:15 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:33.379 14:58:15 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:33.379 14:58:15 -- host/aer.sh@33 -- # aerpid=1137103 00:21:33.379 14:58:15 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:33.379 14:58:15 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:33.379 14:58:15 -- common/autotest_common.sh@1251 -- # local i=0 00:21:33.379 14:58:15 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:33.379 14:58:15 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:33.379 14:58:15 -- common/autotest_common.sh@1254 -- # i=1 00:21:33.379 14:58:15 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:33.379 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.379 14:58:15 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:33.379 14:58:15 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:33.379 14:58:15 -- common/autotest_common.sh@1254 -- # i=2 00:21:33.379 14:58:15 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:33.379 14:58:16 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:33.379 14:58:16 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:33.379 14:58:16 -- common/autotest_common.sh@1262 -- # return 0 00:21:33.379 14:58:16 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:33.379 14:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.379 14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:21:33.639 Malloc1 00:21:33.640 14:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.640 14:58:16 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:33.640 14:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.640 14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 14:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.640 14:58:16 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:33.640 14:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.640 14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 Asynchronous Event Request test 00:21:33.640 Attaching to 10.0.0.2 00:21:33.640 Attached to 10.0.0.2 00:21:33.640 Registering asynchronous event callbacks... 
00:21:33.640 Starting namespace attribute notice tests for all controllers... 00:21:33.640 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:33.640 aer_cb - Changed Namespace 00:21:33.640 Cleaning up... 00:21:33.640 [ 00:21:33.640 { 00:21:33.640 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:33.640 "subtype": "Discovery", 00:21:33.640 "listen_addresses": [], 00:21:33.640 "allow_any_host": true, 00:21:33.640 "hosts": [] 00:21:33.640 }, 00:21:33.640 { 00:21:33.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.640 "subtype": "NVMe", 00:21:33.640 "listen_addresses": [ 00:21:33.640 { 00:21:33.640 "transport": "TCP", 00:21:33.640 "trtype": "TCP", 00:21:33.640 "adrfam": "IPv4", 00:21:33.640 "traddr": "10.0.0.2", 00:21:33.640 "trsvcid": "4420" 00:21:33.640 } 00:21:33.640 ], 00:21:33.640 "allow_any_host": true, 00:21:33.640 "hosts": [], 00:21:33.640 "serial_number": "SPDK00000000000001", 00:21:33.640 "model_number": "SPDK bdev Controller", 00:21:33.640 "max_namespaces": 2, 00:21:33.640 "min_cntlid": 1, 00:21:33.640 "max_cntlid": 65519, 00:21:33.640 "namespaces": [ 00:21:33.640 { 00:21:33.640 "nsid": 1, 00:21:33.640 "bdev_name": "Malloc0", 00:21:33.640 "name": "Malloc0", 00:21:33.640 "nguid": "900DEF8049AC4BF690CE68E888F9AB31", 00:21:33.640 "uuid": "900def80-49ac-4bf6-90ce-68e888f9ab31" 00:21:33.640 }, 00:21:33.640 { 00:21:33.640 "nsid": 2, 00:21:33.640 "bdev_name": "Malloc1", 00:21:33.640 "name": "Malloc1", 00:21:33.640 "nguid": "30F12212CAD8475296961B8016129C89", 00:21:33.640 "uuid": "30f12212-cad8-4752-9696-1b8016129c89" 00:21:33.640 } 00:21:33.640 ] 00:21:33.640 } 00:21:33.640 ] 00:21:33.640 14:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.640 14:58:16 -- host/aer.sh@43 -- # wait 1137103 00:21:33.640 14:58:16 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:33.640 14:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.640 14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 14:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.640 14:58:16 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:33.640 14:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.640 14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 14:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.640 14:58:16 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:33.640 14:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.640 14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 14:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.640 14:58:16 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:33.640 14:58:16 -- host/aer.sh@51 -- # nvmftestfini 00:21:33.640 14:58:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:33.640 14:58:16 -- nvmf/common.sh@117 -- # sync 00:21:33.640 14:58:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.640 14:58:16 -- nvmf/common.sh@120 -- # set +e 00:21:33.640 14:58:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.640 14:58:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.640 rmmod nvme_tcp 00:21:33.640 rmmod nvme_fabrics 00:21:33.640 rmmod nvme_keyring 00:21:33.640 14:58:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.640 14:58:16 -- nvmf/common.sh@124 -- # set -e 00:21:33.640 14:58:16 -- nvmf/common.sh@125 -- # return 0 00:21:33.640 14:58:16 -- nvmf/common.sh@478 -- # '[' -n 1136827 ']' 00:21:33.640 14:58:16 
-- nvmf/common.sh@479 -- # killprocess 1136827 00:21:33.640 14:58:16 -- common/autotest_common.sh@936 -- # '[' -z 1136827 ']' 00:21:33.640 14:58:16 -- common/autotest_common.sh@940 -- # kill -0 1136827 00:21:33.640 14:58:16 -- common/autotest_common.sh@941 -- # uname 00:21:33.640 14:58:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:33.640 14:58:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1136827 00:21:33.640 14:58:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:33.640 14:58:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:33.640 14:58:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1136827' 00:21:33.640 killing process with pid 1136827 00:21:33.640 14:58:16 -- common/autotest_common.sh@955 -- # kill 1136827 00:21:33.640 [2024-04-26 14:58:16.287961] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:33.640 14:58:16 -- common/autotest_common.sh@960 -- # wait 1136827 00:21:33.901 14:58:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:33.901 14:58:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:33.901 14:58:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:33.901 14:58:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.901 14:58:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.901 14:58:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.901 14:58:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.901 14:58:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.446 14:58:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:36.446 00:21:36.446 real 0m10.949s 00:21:36.446 user 0m7.409s 00:21:36.446 sys 0m5.760s 00:21:36.446 14:58:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.446 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:21:36.446 ************************************ 00:21:36.446 END TEST nvmf_aer 00:21:36.446 ************************************ 00:21:36.446 14:58:18 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:36.446 14:58:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:36.446 14:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.446 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:21:36.446 ************************************ 00:21:36.446 START TEST nvmf_async_init 00:21:36.446 ************************************ 00:21:36.446 14:58:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:36.446 * Looking for test storage... 
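Before the nvmf_async_init output begins: condensed, the nvmf_aer test that just finished drove the target through roughly the RPC sequence below (shown as direct scripts/rpc.py calls for readability; the trace issues them through the rpc_cmd wrapper, so the exact invocation path is an assumption):

    # build the TCP target that host/aer.sh connects to
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # with the aer tool connected, adding a second namespace triggers the
    # "Changed Namespace" AEN and aer_cb output seen earlier
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2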
00:21:36.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:36.446 14:58:18 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.446 14:58:18 -- nvmf/common.sh@7 -- # uname -s 00:21:36.446 14:58:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.446 14:58:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.446 14:58:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.446 14:58:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.446 14:58:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.446 14:58:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.446 14:58:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.446 14:58:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.446 14:58:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.446 14:58:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.446 14:58:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.446 14:58:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.446 14:58:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.446 14:58:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.446 14:58:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.446 14:58:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.446 14:58:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.446 14:58:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.446 14:58:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.446 14:58:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.446 14:58:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.446 14:58:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.447 14:58:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.447 14:58:18 -- paths/export.sh@5 -- # export PATH 00:21:36.447 14:58:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.447 14:58:18 -- nvmf/common.sh@47 -- # : 0 00:21:36.447 14:58:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:36.447 14:58:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:36.447 14:58:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.447 14:58:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.447 14:58:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.447 14:58:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:36.447 14:58:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:36.447 14:58:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:36.447 14:58:18 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:36.447 14:58:18 -- host/async_init.sh@14 -- # null_block_size=512 00:21:36.447 14:58:18 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:36.447 14:58:18 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:36.447 14:58:18 -- host/async_init.sh@20 -- # uuidgen 00:21:36.447 14:58:18 -- host/async_init.sh@20 -- # tr -d - 00:21:36.447 14:58:18 -- host/async_init.sh@20 -- # nguid=4dbaa74902c345b496c023d05c15c76b 00:21:36.447 14:58:18 -- host/async_init.sh@22 -- # nvmftestinit 00:21:36.447 14:58:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:36.447 14:58:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.447 14:58:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:36.447 14:58:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:36.447 14:58:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:36.447 14:58:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.447 14:58:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.447 14:58:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.447 14:58:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:36.447 14:58:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:36.447 14:58:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:36.447 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:21:44.592 14:58:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:44.592 14:58:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.592 14:58:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.592 14:58:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.592 14:58:25 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.592 14:58:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.592 14:58:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.592 14:58:25 -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.592 14:58:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.592 14:58:25 -- nvmf/common.sh@296 -- # e810=() 00:21:44.592 14:58:25 -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.592 14:58:25 -- nvmf/common.sh@297 -- # x722=() 00:21:44.592 14:58:25 -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.592 14:58:25 -- nvmf/common.sh@298 -- # mlx=() 00:21:44.592 14:58:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.592 14:58:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.592 14:58:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.592 14:58:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.592 14:58:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.592 14:58:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.592 14:58:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:44.592 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:44.592 14:58:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.592 14:58:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:44.592 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:44.592 14:58:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.592 14:58:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.592 14:58:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.593 
14:58:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.593 14:58:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:44.593 14:58:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.593 14:58:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:44.593 Found net devices under 0000:31:00.0: cvl_0_0 00:21:44.593 14:58:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.593 14:58:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.593 14:58:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.593 14:58:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:44.593 14:58:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.593 14:58:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:44.593 Found net devices under 0000:31:00.1: cvl_0_1 00:21:44.593 14:58:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.593 14:58:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:44.593 14:58:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:44.593 14:58:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:44.593 14:58:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:44.593 14:58:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:44.593 14:58:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.593 14:58:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.593 14:58:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.593 14:58:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:44.593 14:58:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.593 14:58:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.593 14:58:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:44.593 14:58:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.593 14:58:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.593 14:58:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:44.593 14:58:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:44.593 14:58:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.593 14:58:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.593 14:58:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.593 14:58:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.593 14:58:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:44.593 14:58:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.593 14:58:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.593 14:58:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.593 14:58:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:44.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:21:44.593 00:21:44.593 --- 10.0.0.2 ping statistics --- 00:21:44.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.593 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:21:44.593 14:58:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:21:44.593 00:21:44.593 --- 10.0.0.1 ping statistics --- 00:21:44.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.593 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:21:44.593 14:58:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.593 14:58:26 -- nvmf/common.sh@411 -- # return 0 00:21:44.593 14:58:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:44.593 14:58:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.593 14:58:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:44.593 14:58:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:44.593 14:58:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.593 14:58:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:44.593 14:58:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:44.593 14:58:26 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:44.593 14:58:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:44.593 14:58:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:44.593 14:58:26 -- common/autotest_common.sh@10 -- # set +x 00:21:44.593 14:58:26 -- nvmf/common.sh@470 -- # nvmfpid=1141488 00:21:44.593 14:58:26 -- nvmf/common.sh@471 -- # waitforlisten 1141488 00:21:44.593 14:58:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:44.593 14:58:26 -- common/autotest_common.sh@817 -- # '[' -z 1141488 ']' 00:21:44.593 14:58:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.593 14:58:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:44.593 14:58:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.593 14:58:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:44.593 14:58:26 -- common/autotest_common.sh@10 -- # set +x 00:21:44.593 [2024-04-26 14:58:26.275570] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:44.593 [2024-04-26 14:58:26.275634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.593 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.593 [2024-04-26 14:58:26.346763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.593 [2024-04-26 14:58:26.419127] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.593 [2024-04-26 14:58:26.419166] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.593 [2024-04-26 14:58:26.419174] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.593 [2024-04-26 14:58:26.419181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.593 [2024-04-26 14:58:26.419186] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
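The nvmfappstart step above reduces to launching the target inside the test namespace and waiting for its RPC socket; a minimal sketch (the polling loop is an assumption standing in for the waitforlisten helper, whose internals are not part of this log):

    # single-core target for the async_init test, run inside the target namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # wait for the UNIX domain RPC socket before issuing any rpc.py calls
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done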
00:21:44.593 [2024-04-26 14:58:26.419211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.593 14:58:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:44.593 14:58:27 -- common/autotest_common.sh@850 -- # return 0 00:21:44.593 14:58:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:44.593 14:58:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:44.593 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.593 14:58:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.593 14:58:27 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:44.593 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.593 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.593 [2024-04-26 14:58:27.086128] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.593 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.593 14:58:27 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:44.593 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.593 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.593 null0 00:21:44.593 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.593 14:58:27 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:44.593 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.593 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.593 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.593 14:58:27 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:44.593 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.593 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.593 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.593 14:58:27 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4dbaa74902c345b496c023d05c15c76b 00:21:44.593 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.593 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.593 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.593 14:58:27 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:44.593 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.593 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.593 [2024-04-26 14:58:27.142381] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.593 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.593 14:58:27 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:44.593 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.593 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.855 nvme0n1 00:21:44.855 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.855 14:58:27 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:44.855 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.855 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.855 [ 00:21:44.855 { 00:21:44.855 "name": "nvme0n1", 00:21:44.855 "aliases": [ 00:21:44.855 
"4dbaa749-02c3-45b4-96c0-23d05c15c76b" 00:21:44.855 ], 00:21:44.855 "product_name": "NVMe disk", 00:21:44.855 "block_size": 512, 00:21:44.855 "num_blocks": 2097152, 00:21:44.855 "uuid": "4dbaa749-02c3-45b4-96c0-23d05c15c76b", 00:21:44.855 "assigned_rate_limits": { 00:21:44.855 "rw_ios_per_sec": 0, 00:21:44.855 "rw_mbytes_per_sec": 0, 00:21:44.855 "r_mbytes_per_sec": 0, 00:21:44.855 "w_mbytes_per_sec": 0 00:21:44.855 }, 00:21:44.855 "claimed": false, 00:21:44.855 "zoned": false, 00:21:44.855 "supported_io_types": { 00:21:44.855 "read": true, 00:21:44.855 "write": true, 00:21:44.855 "unmap": false, 00:21:44.855 "write_zeroes": true, 00:21:44.855 "flush": true, 00:21:44.855 "reset": true, 00:21:44.855 "compare": true, 00:21:44.855 "compare_and_write": true, 00:21:44.855 "abort": true, 00:21:44.855 "nvme_admin": true, 00:21:44.855 "nvme_io": true 00:21:44.855 }, 00:21:44.855 "memory_domains": [ 00:21:44.855 { 00:21:44.855 "dma_device_id": "system", 00:21:44.855 "dma_device_type": 1 00:21:44.855 } 00:21:44.855 ], 00:21:44.855 "driver_specific": { 00:21:44.855 "nvme": [ 00:21:44.855 { 00:21:44.855 "trid": { 00:21:44.855 "trtype": "TCP", 00:21:44.855 "adrfam": "IPv4", 00:21:44.855 "traddr": "10.0.0.2", 00:21:44.855 "trsvcid": "4420", 00:21:44.855 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:44.855 }, 00:21:44.855 "ctrlr_data": { 00:21:44.855 "cntlid": 1, 00:21:44.855 "vendor_id": "0x8086", 00:21:44.855 "model_number": "SPDK bdev Controller", 00:21:44.855 "serial_number": "00000000000000000000", 00:21:44.855 "firmware_revision": "24.05", 00:21:44.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.855 "oacs": { 00:21:44.855 "security": 0, 00:21:44.855 "format": 0, 00:21:44.855 "firmware": 0, 00:21:44.855 "ns_manage": 0 00:21:44.855 }, 00:21:44.855 "multi_ctrlr": true, 00:21:44.855 "ana_reporting": false 00:21:44.855 }, 00:21:44.855 "vs": { 00:21:44.855 "nvme_version": "1.3" 00:21:44.855 }, 00:21:44.855 "ns_data": { 00:21:44.855 "id": 1, 00:21:44.855 "can_share": true 00:21:44.855 } 00:21:44.855 } 00:21:44.855 ], 00:21:44.855 "mp_policy": "active_passive" 00:21:44.855 } 00:21:44.856 } 00:21:44.856 ] 00:21:44.856 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.856 14:58:27 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:44.856 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.856 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:44.856 [2024-04-26 14:58:27.406936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:44.856 [2024-04-26 14:58:27.406998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90b550 (9): Bad file descriptor 00:21:45.118 [2024-04-26 14:58:27.538937] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:45.118 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.118 14:58:27 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:45.118 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.118 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:45.118 [ 00:21:45.118 { 00:21:45.118 "name": "nvme0n1", 00:21:45.118 "aliases": [ 00:21:45.118 "4dbaa749-02c3-45b4-96c0-23d05c15c76b" 00:21:45.118 ], 00:21:45.118 "product_name": "NVMe disk", 00:21:45.118 "block_size": 512, 00:21:45.118 "num_blocks": 2097152, 00:21:45.118 "uuid": "4dbaa749-02c3-45b4-96c0-23d05c15c76b", 00:21:45.118 "assigned_rate_limits": { 00:21:45.118 "rw_ios_per_sec": 0, 00:21:45.118 "rw_mbytes_per_sec": 0, 00:21:45.118 "r_mbytes_per_sec": 0, 00:21:45.118 "w_mbytes_per_sec": 0 00:21:45.118 }, 00:21:45.118 "claimed": false, 00:21:45.118 "zoned": false, 00:21:45.118 "supported_io_types": { 00:21:45.118 "read": true, 00:21:45.118 "write": true, 00:21:45.118 "unmap": false, 00:21:45.118 "write_zeroes": true, 00:21:45.118 "flush": true, 00:21:45.118 "reset": true, 00:21:45.118 "compare": true, 00:21:45.118 "compare_and_write": true, 00:21:45.118 "abort": true, 00:21:45.118 "nvme_admin": true, 00:21:45.118 "nvme_io": true 00:21:45.118 }, 00:21:45.118 "memory_domains": [ 00:21:45.118 { 00:21:45.118 "dma_device_id": "system", 00:21:45.118 "dma_device_type": 1 00:21:45.118 } 00:21:45.118 ], 00:21:45.118 "driver_specific": { 00:21:45.118 "nvme": [ 00:21:45.118 { 00:21:45.118 "trid": { 00:21:45.118 "trtype": "TCP", 00:21:45.118 "adrfam": "IPv4", 00:21:45.118 "traddr": "10.0.0.2", 00:21:45.118 "trsvcid": "4420", 00:21:45.118 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:45.118 }, 00:21:45.118 "ctrlr_data": { 00:21:45.118 "cntlid": 2, 00:21:45.118 "vendor_id": "0x8086", 00:21:45.118 "model_number": "SPDK bdev Controller", 00:21:45.118 "serial_number": "00000000000000000000", 00:21:45.118 "firmware_revision": "24.05", 00:21:45.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.118 "oacs": { 00:21:45.118 "security": 0, 00:21:45.118 "format": 0, 00:21:45.118 "firmware": 0, 00:21:45.118 "ns_manage": 0 00:21:45.118 }, 00:21:45.118 "multi_ctrlr": true, 00:21:45.118 "ana_reporting": false 00:21:45.118 }, 00:21:45.118 "vs": { 00:21:45.118 "nvme_version": "1.3" 00:21:45.118 }, 00:21:45.118 "ns_data": { 00:21:45.118 "id": 1, 00:21:45.118 "can_share": true 00:21:45.118 } 00:21:45.118 } 00:21:45.118 ], 00:21:45.118 "mp_policy": "active_passive" 00:21:45.118 } 00:21:45.118 } 00:21:45.118 ] 00:21:45.118 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.118 14:58:27 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.118 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.118 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:45.118 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.118 14:58:27 -- host/async_init.sh@53 -- # mktemp 00:21:45.118 14:58:27 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.kDydCIWHEd 00:21:45.118 14:58:27 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:45.118 14:58:27 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.kDydCIWHEd 00:21:45.118 14:58:27 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:45.118 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.118 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:45.118 14:58:27 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.118 14:58:27 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:45.118 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.118 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:45.118 [2024-04-26 14:58:27.603551] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.118 [2024-04-26 14:58:27.603663] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:45.118 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.118 14:58:27 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kDydCIWHEd 00:21:45.118 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.118 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:45.118 [2024-04-26 14:58:27.615580] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:45.118 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.118 14:58:27 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kDydCIWHEd 00:21:45.118 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.118 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:45.118 [2024-04-26 14:58:27.627614] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.119 [2024-04-26 14:58:27.627652] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:45.119 nvme0n1 00:21:45.119 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.119 14:58:27 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:45.119 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.119 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:45.119 [ 00:21:45.119 { 00:21:45.119 "name": "nvme0n1", 00:21:45.119 "aliases": [ 00:21:45.119 "4dbaa749-02c3-45b4-96c0-23d05c15c76b" 00:21:45.119 ], 00:21:45.119 "product_name": "NVMe disk", 00:21:45.119 "block_size": 512, 00:21:45.119 "num_blocks": 2097152, 00:21:45.119 "uuid": "4dbaa749-02c3-45b4-96c0-23d05c15c76b", 00:21:45.119 "assigned_rate_limits": { 00:21:45.119 "rw_ios_per_sec": 0, 00:21:45.119 "rw_mbytes_per_sec": 0, 00:21:45.119 "r_mbytes_per_sec": 0, 00:21:45.119 "w_mbytes_per_sec": 0 00:21:45.119 }, 00:21:45.119 "claimed": false, 00:21:45.119 "zoned": false, 00:21:45.119 "supported_io_types": { 00:21:45.119 "read": true, 00:21:45.119 "write": true, 00:21:45.119 "unmap": false, 00:21:45.119 "write_zeroes": true, 00:21:45.119 "flush": true, 00:21:45.119 "reset": true, 00:21:45.119 "compare": true, 00:21:45.119 "compare_and_write": true, 00:21:45.119 "abort": true, 00:21:45.119 "nvme_admin": true, 00:21:45.119 "nvme_io": true 00:21:45.119 }, 00:21:45.119 "memory_domains": [ 00:21:45.119 { 00:21:45.119 "dma_device_id": "system", 00:21:45.119 "dma_device_type": 1 00:21:45.119 } 00:21:45.119 ], 00:21:45.119 "driver_specific": { 00:21:45.119 "nvme": [ 00:21:45.119 { 00:21:45.119 "trid": { 00:21:45.119 "trtype": "TCP", 00:21:45.119 "adrfam": "IPv4", 00:21:45.119 "traddr": "10.0.0.2", 
00:21:45.119 "trsvcid": "4421", 00:21:45.119 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:45.119 }, 00:21:45.119 "ctrlr_data": { 00:21:45.119 "cntlid": 3, 00:21:45.119 "vendor_id": "0x8086", 00:21:45.119 "model_number": "SPDK bdev Controller", 00:21:45.119 "serial_number": "00000000000000000000", 00:21:45.119 "firmware_revision": "24.05", 00:21:45.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:45.119 "oacs": { 00:21:45.119 "security": 0, 00:21:45.119 "format": 0, 00:21:45.119 "firmware": 0, 00:21:45.119 "ns_manage": 0 00:21:45.119 }, 00:21:45.119 "multi_ctrlr": true, 00:21:45.119 "ana_reporting": false 00:21:45.119 }, 00:21:45.119 "vs": { 00:21:45.119 "nvme_version": "1.3" 00:21:45.119 }, 00:21:45.119 "ns_data": { 00:21:45.119 "id": 1, 00:21:45.119 "can_share": true 00:21:45.119 } 00:21:45.119 } 00:21:45.119 ], 00:21:45.119 "mp_policy": "active_passive" 00:21:45.119 } 00:21:45.119 } 00:21:45.119 ] 00:21:45.119 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.119 14:58:27 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:45.119 14:58:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:45.119 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:45.119 14:58:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.119 14:58:27 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.kDydCIWHEd 00:21:45.119 14:58:27 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:45.119 14:58:27 -- host/async_init.sh@78 -- # nvmftestfini 00:21:45.119 14:58:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:45.119 14:58:27 -- nvmf/common.sh@117 -- # sync 00:21:45.119 14:58:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.119 14:58:27 -- nvmf/common.sh@120 -- # set +e 00:21:45.119 14:58:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.119 14:58:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.119 rmmod nvme_tcp 00:21:45.119 rmmod nvme_fabrics 00:21:45.381 rmmod nvme_keyring 00:21:45.381 14:58:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.381 14:58:27 -- nvmf/common.sh@124 -- # set -e 00:21:45.381 14:58:27 -- nvmf/common.sh@125 -- # return 0 00:21:45.381 14:58:27 -- nvmf/common.sh@478 -- # '[' -n 1141488 ']' 00:21:45.381 14:58:27 -- nvmf/common.sh@479 -- # killprocess 1141488 00:21:45.381 14:58:27 -- common/autotest_common.sh@936 -- # '[' -z 1141488 ']' 00:21:45.381 14:58:27 -- common/autotest_common.sh@940 -- # kill -0 1141488 00:21:45.381 14:58:27 -- common/autotest_common.sh@941 -- # uname 00:21:45.381 14:58:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.381 14:58:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1141488 00:21:45.381 14:58:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:45.381 14:58:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:45.381 14:58:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1141488' 00:21:45.381 killing process with pid 1141488 00:21:45.381 14:58:27 -- common/autotest_common.sh@955 -- # kill 1141488 00:21:45.381 [2024-04-26 14:58:27.882331] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:45.381 [2024-04-26 14:58:27.882358] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:45.381 14:58:27 -- common/autotest_common.sh@960 -- # wait 1141488 00:21:45.381 14:58:28 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:45.381 14:58:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:45.381 14:58:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:45.381 14:58:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.381 14:58:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.381 14:58:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.381 14:58:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.381 14:58:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.993 14:58:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.993 00:21:47.993 real 0m11.381s 00:21:47.993 user 0m4.063s 00:21:47.993 sys 0m5.760s 00:21:47.993 14:58:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:47.993 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:47.993 ************************************ 00:21:47.993 END TEST nvmf_async_init 00:21:47.993 ************************************ 00:21:47.993 14:58:30 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:47.993 14:58:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:47.993 14:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:47.993 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:47.993 ************************************ 00:21:47.993 START TEST dma 00:21:47.993 ************************************ 00:21:47.993 14:58:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:47.993 * Looking for test storage... 00:21:47.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.993 14:58:30 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.993 14:58:30 -- nvmf/common.sh@7 -- # uname -s 00:21:47.993 14:58:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.993 14:58:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.993 14:58:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.993 14:58:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.993 14:58:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.993 14:58:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.993 14:58:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.993 14:58:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.993 14:58:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.993 14:58:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.993 14:58:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.993 14:58:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.993 14:58:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.993 14:58:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.993 14:58:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.993 14:58:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.993 14:58:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.993 14:58:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.993 14:58:30 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.993 14:58:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.993 14:58:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.993 14:58:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.993 14:58:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.993 14:58:30 -- paths/export.sh@5 -- # export PATH 00:21:47.993 14:58:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.993 14:58:30 -- nvmf/common.sh@47 -- # : 0 00:21:47.993 14:58:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.993 14:58:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.993 14:58:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.993 14:58:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.993 14:58:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.993 14:58:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.993 14:58:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.993 14:58:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.993 14:58:30 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:47.993 14:58:30 -- host/dma.sh@13 -- # exit 0 00:21:47.993 00:21:47.993 real 0m0.134s 00:21:47.993 user 0m0.066s 00:21:47.993 sys 0m0.076s 00:21:47.993 14:58:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:47.993 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:47.993 ************************************ 00:21:47.993 END TEST dma 00:21:47.993 
************************************ 00:21:47.993 14:58:30 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:47.993 14:58:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:47.993 14:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:47.993 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:47.993 ************************************ 00:21:47.993 START TEST nvmf_identify 00:21:47.993 ************************************ 00:21:47.993 14:58:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:48.258 * Looking for test storage... 00:21:48.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:48.258 14:58:30 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.258 14:58:30 -- nvmf/common.sh@7 -- # uname -s 00:21:48.258 14:58:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.258 14:58:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.258 14:58:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.258 14:58:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.258 14:58:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.258 14:58:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.258 14:58:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.258 14:58:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.258 14:58:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.258 14:58:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.258 14:58:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.258 14:58:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.258 14:58:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.258 14:58:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.258 14:58:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.258 14:58:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.258 14:58:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.258 14:58:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.258 14:58:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.258 14:58:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.258 14:58:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.258 14:58:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.258 14:58:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.258 14:58:30 -- paths/export.sh@5 -- # export PATH 00:21:48.258 14:58:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.258 14:58:30 -- nvmf/common.sh@47 -- # : 0 00:21:48.258 14:58:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.258 14:58:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.258 14:58:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.258 14:58:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.258 14:58:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.258 14:58:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.258 14:58:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.258 14:58:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.258 14:58:30 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:48.258 14:58:30 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:48.258 14:58:30 -- host/identify.sh@14 -- # nvmftestinit 00:21:48.258 14:58:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:48.258 14:58:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.258 14:58:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:48.258 14:58:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:48.258 14:58:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:48.258 14:58:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.258 14:58:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.258 14:58:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.258 14:58:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:48.258 14:58:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:48.258 14:58:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.258 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:54.866 14:58:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
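The identify.sh preamble above sources test/nvmf/common.sh, which generates the initiator's host identity once with nvme gen-hostnqn and packages it into the NVME_HOST argument array that later nvme-cli calls reuse. A minimal standalone sketch of that pattern, assuming the host ID is derived from the UUID part of the generated NQN (which matches the values logged above, though the exact derivation and the discover invocation are illustrative, not copied from common.sh):

    # Sketch of the host-identity pattern traced above, not the literal common.sh code.
    # nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation; reuses the UUID as host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Example use once a target is listening (address/port as configured later in this run):
    nvme discover -t tcp -a 10.0.0.2 -s 4420 "${NVME_HOST[@]}"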
00:21:54.866 14:58:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.866 14:58:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.866 14:58:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:54.866 14:58:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.866 14:58:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.866 14:58:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.866 14:58:37 -- nvmf/common.sh@295 -- # net_devs=() 00:21:54.866 14:58:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.866 14:58:37 -- nvmf/common.sh@296 -- # e810=() 00:21:54.866 14:58:37 -- nvmf/common.sh@296 -- # local -ga e810 00:21:54.866 14:58:37 -- nvmf/common.sh@297 -- # x722=() 00:21:54.866 14:58:37 -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.866 14:58:37 -- nvmf/common.sh@298 -- # mlx=() 00:21:54.866 14:58:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.866 14:58:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.866 14:58:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.866 14:58:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.866 14:58:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.866 14:58:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.866 14:58:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:54.866 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:54.866 14:58:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.866 14:58:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:54.866 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:54.866 14:58:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
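The nvmf/common.sh trace above buckets candidate NICs by PCI vendor/device ID (both E810 ports in this run report 0x8086:0x159b and are bound to the ice driver); the lines that follow then resolve each PCI address to its kernel netdev through the same sysfs path. A rough standalone equivalent of that discovery step, shown only as a sketch and not the helper's actual code:

    # Illustrative only: list E810 ports by PCI ID and the netdev bound to each,
    # via the sysfs path the helper walks (/sys/bus/pci/devices/<addr>/net/).
    # lspci flags: -D full PCI address, -n numeric IDs, -d vendor:device filter.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        netdev=$(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)
        echo "Found $pci -> ${netdev:-no netdev bound}"
    done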
00:21:54.866 14:58:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:54.866 14:58:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.867 14:58:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.867 14:58:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.867 14:58:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:54.867 14:58:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.867 14:58:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:54.867 Found net devices under 0000:31:00.0: cvl_0_0 00:21:54.867 14:58:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.867 14:58:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.867 14:58:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.867 14:58:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:54.867 14:58:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.867 14:58:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:54.867 Found net devices under 0000:31:00.1: cvl_0_1 00:21:54.867 14:58:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.867 14:58:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:54.867 14:58:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:54.867 14:58:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:54.867 14:58:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:54.867 14:58:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:54.867 14:58:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.867 14:58:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.867 14:58:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.867 14:58:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.867 14:58:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.867 14:58:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.867 14:58:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.867 14:58:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.867 14:58:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.867 14:58:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.867 14:58:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.867 14:58:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.867 14:58:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.125 14:58:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.125 14:58:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.125 14:58:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.125 14:58:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.125 14:58:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.125 14:58:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.125 14:58:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:55.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:21:55.126 00:21:55.126 --- 10.0.0.2 ping statistics --- 00:21:55.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.126 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:21:55.126 14:58:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:21:55.126 00:21:55.126 --- 10.0.0.1 ping statistics --- 00:21:55.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.126 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:21:55.126 14:58:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.126 14:58:37 -- nvmf/common.sh@411 -- # return 0 00:21:55.126 14:58:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:55.126 14:58:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.126 14:58:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:55.126 14:58:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:55.126 14:58:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.126 14:58:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:55.126 14:58:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:55.126 14:58:37 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:55.126 14:58:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:55.126 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:55.385 14:58:37 -- host/identify.sh@19 -- # nvmfpid=1145968 00:21:55.385 14:58:37 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.385 14:58:37 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.385 14:58:37 -- host/identify.sh@23 -- # waitforlisten 1145968 00:21:55.385 14:58:37 -- common/autotest_common.sh@817 -- # '[' -z 1145968 ']' 00:21:55.385 14:58:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.385 14:58:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:55.385 14:58:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.385 14:58:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:55.385 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:55.385 [2024-04-26 14:58:37.846506] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:55.385 [2024-04-26 14:58:37.846570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.385 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.385 [2024-04-26 14:58:37.919271] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.385 [2024-04-26 14:58:37.994404] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.385 [2024-04-26 14:58:37.994442] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
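At this point the target-side port has been isolated in its own network namespace and reachability has been verified in both directions before nvmf_tgt is launched inside that namespace. A condensed recap of the commands traced above (device names, addresses, and the 0xF core mask are this run's values; the addr-flush steps are omitted and the nvmf_tgt path is shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side E810 port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &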
00:21:55.385 [2024-04-26 14:58:37.994451] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.385 [2024-04-26 14:58:37.994458] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.385 [2024-04-26 14:58:37.994466] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.385 [2024-04-26 14:58:37.994633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.385 [2024-04-26 14:58:37.994767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.385 [2024-04-26 14:58:37.994917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.385 [2024-04-26 14:58:37.994917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.955 14:58:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:55.955 14:58:38 -- common/autotest_common.sh@850 -- # return 0 00:21:55.955 14:58:38 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.955 14:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.955 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:21:56.217 [2024-04-26 14:58:38.623278] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.217 14:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.217 14:58:38 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:56.217 14:58:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:56.217 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:21:56.217 14:58:38 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:56.217 14:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.217 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:21:56.217 Malloc0 00:21:56.217 14:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.217 14:58:38 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:56.217 14:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.217 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:21:56.217 14:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.217 14:58:38 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:56.217 14:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.217 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:21:56.217 14:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.217 14:58:38 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:56.217 14:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.217 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:21:56.217 [2024-04-26 14:58:38.706761] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.217 14:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.217 14:58:38 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:56.217 14:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.217 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:21:56.217 14:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.217 14:58:38 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:21:56.217 14:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.217 14:58:38 -- common/autotest_common.sh@10 -- # set +x 00:21:56.217 [2024-04-26 14:58:38.722586] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:56.217 [ 00:21:56.217 { 00:21:56.217 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:56.217 "subtype": "Discovery", 00:21:56.217 "listen_addresses": [ 00:21:56.217 { 00:21:56.217 "transport": "TCP", 00:21:56.217 "trtype": "TCP", 00:21:56.217 "adrfam": "IPv4", 00:21:56.217 "traddr": "10.0.0.2", 00:21:56.217 "trsvcid": "4420" 00:21:56.217 } 00:21:56.217 ], 00:21:56.217 "allow_any_host": true, 00:21:56.217 "hosts": [] 00:21:56.217 }, 00:21:56.217 { 00:21:56.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.217 "subtype": "NVMe", 00:21:56.217 "listen_addresses": [ 00:21:56.217 { 00:21:56.217 "transport": "TCP", 00:21:56.217 "trtype": "TCP", 00:21:56.217 "adrfam": "IPv4", 00:21:56.217 "traddr": "10.0.0.2", 00:21:56.217 "trsvcid": "4420" 00:21:56.217 } 00:21:56.217 ], 00:21:56.217 "allow_any_host": true, 00:21:56.217 "hosts": [], 00:21:56.217 "serial_number": "SPDK00000000000001", 00:21:56.217 "model_number": "SPDK bdev Controller", 00:21:56.217 "max_namespaces": 32, 00:21:56.217 "min_cntlid": 1, 00:21:56.217 "max_cntlid": 65519, 00:21:56.217 "namespaces": [ 00:21:56.217 { 00:21:56.217 "nsid": 1, 00:21:56.217 "bdev_name": "Malloc0", 00:21:56.217 "name": "Malloc0", 00:21:56.217 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:56.217 "eui64": "ABCDEF0123456789", 00:21:56.217 "uuid": "d0b3657e-ae0c-4006-a065-0dbfc5810d1c" 00:21:56.217 } 00:21:56.217 ] 00:21:56.217 } 00:21:56.217 ] 00:21:56.217 14:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.217 14:58:38 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:56.217 [2024-04-26 14:58:38.756361] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
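The target configuration exercised above (TCP transport, one malloc-backed namespace under nqn.2016-06.io.spdk:cnode1, data and discovery listeners on 10.0.0.2:4420) is driven through the test's rpc_cmd wrapper. Issued directly with scripts/rpc.py against the same target, the sequence would look roughly like the sketch below; rpc_cmd adds RPC-socket selection and retry handling that is omitted here:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems          # returns the JSON listing shown above

The spdk_nvme_identify run that follows connects to the discovery subsystem listed in that JSON (nqn.2014-08.org.nvmexpress.discovery at 10.0.0.2:4420).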
00:21:56.217 [2024-04-26 14:58:38.756409] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146314 ] 00:21:56.217 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.217 [2024-04-26 14:58:38.787491] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:56.217 [2024-04-26 14:58:38.787534] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:56.217 [2024-04-26 14:58:38.787539] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:56.217 [2024-04-26 14:58:38.787550] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:56.217 [2024-04-26 14:58:38.787557] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:56.217 [2024-04-26 14:58:38.790872] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:56.217 [2024-04-26 14:58:38.790904] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe0dd10 0 00:21:56.217 [2024-04-26 14:58:38.791168] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:56.217 [2024-04-26 14:58:38.791176] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:56.217 [2024-04-26 14:58:38.791180] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:56.217 [2024-04-26 14:58:38.791183] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:56.217 [2024-04-26 14:58:38.791220] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.217 [2024-04-26 14:58:38.791226] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.217 [2024-04-26 14:58:38.791230] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.218 [2024-04-26 14:58:38.791242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:56.218 [2024-04-26 14:58:38.791255] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.218 [2024-04-26 14:58:38.798848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.218 [2024-04-26 14:58:38.798857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.218 [2024-04-26 14:58:38.798861] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.798865] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75a60) on tqpair=0xe0dd10 00:21:56.218 [2024-04-26 14:58:38.798874] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:56.218 [2024-04-26 14:58:38.798880] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:56.218 [2024-04-26 14:58:38.798886] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:56.218 [2024-04-26 14:58:38.798898] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.798902] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.798905] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.218 [2024-04-26 14:58:38.798913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.218 [2024-04-26 14:58:38.798925] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.218 [2024-04-26 14:58:38.799135] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.218 [2024-04-26 14:58:38.799141] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.218 [2024-04-26 14:58:38.799144] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799148] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75a60) on tqpair=0xe0dd10 00:21:56.218 [2024-04-26 14:58:38.799153] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:56.218 [2024-04-26 14:58:38.799160] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:56.218 [2024-04-26 14:58:38.799167] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799170] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799174] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.218 [2024-04-26 14:58:38.799180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.218 [2024-04-26 14:58:38.799190] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.218 [2024-04-26 14:58:38.799257] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.218 [2024-04-26 14:58:38.799263] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.218 [2024-04-26 14:58:38.799267] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799270] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75a60) on tqpair=0xe0dd10 00:21:56.218 [2024-04-26 14:58:38.799275] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:56.218 [2024-04-26 14:58:38.799283] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:56.218 [2024-04-26 14:58:38.799292] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799295] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799299] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.218 [2024-04-26 14:58:38.799306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.218 [2024-04-26 14:58:38.799315] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.218 [2024-04-26 14:58:38.799383] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.218 [2024-04-26 14:58:38.799389] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.218 [2024-04-26 14:58:38.799392] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799396] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75a60) on tqpair=0xe0dd10 00:21:56.218 [2024-04-26 14:58:38.799401] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:56.218 [2024-04-26 14:58:38.799410] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799413] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799417] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.218 [2024-04-26 14:58:38.799423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.218 [2024-04-26 14:58:38.799433] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.218 [2024-04-26 14:58:38.799497] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.218 [2024-04-26 14:58:38.799503] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.218 [2024-04-26 14:58:38.799507] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799510] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75a60) on tqpair=0xe0dd10 00:21:56.218 [2024-04-26 14:58:38.799515] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:56.218 [2024-04-26 14:58:38.799520] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:56.218 [2024-04-26 14:58:38.799527] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:56.218 [2024-04-26 14:58:38.799632] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:56.218 [2024-04-26 14:58:38.799637] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:56.218 [2024-04-26 14:58:38.799644] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799648] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799651] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.218 [2024-04-26 14:58:38.799658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.218 [2024-04-26 14:58:38.799668] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.218 [2024-04-26 14:58:38.799738] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.218 [2024-04-26 14:58:38.799744] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.218 [2024-04-26 14:58:38.799748] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.218 
[2024-04-26 14:58:38.799751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75a60) on tqpair=0xe0dd10 00:21:56.218 [2024-04-26 14:58:38.799756] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:56.218 [2024-04-26 14:58:38.799768] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799771] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799775] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.218 [2024-04-26 14:58:38.799782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.218 [2024-04-26 14:58:38.799791] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.218 [2024-04-26 14:58:38.799891] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.218 [2024-04-26 14:58:38.799897] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.218 [2024-04-26 14:58:38.799900] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.218 [2024-04-26 14:58:38.799904] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75a60) on tqpair=0xe0dd10 00:21:56.219 [2024-04-26 14:58:38.799909] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:56.219 [2024-04-26 14:58:38.799913] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:56.219 [2024-04-26 14:58:38.799920] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:56.219 [2024-04-26 14:58:38.799928] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:56.219 [2024-04-26 14:58:38.799938] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.799941] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.219 [2024-04-26 14:58:38.799948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.219 [2024-04-26 14:58:38.799958] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.219 [2024-04-26 14:58:38.800045] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.219 [2024-04-26 14:58:38.800051] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.219 [2024-04-26 14:58:38.800054] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800058] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0dd10): datao=0, datal=4096, cccid=0 00:21:56.219 [2024-04-26 14:58:38.800063] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe75a60) on tqpair(0xe0dd10): expected_datao=0, payload_size=4096 00:21:56.219 [2024-04-26 14:58:38.800067] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.219 
[2024-04-26 14:58:38.800089] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800093] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800132] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.219 [2024-04-26 14:58:38.800138] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.219 [2024-04-26 14:58:38.800142] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800145] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75a60) on tqpair=0xe0dd10 00:21:56.219 [2024-04-26 14:58:38.800153] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:56.219 [2024-04-26 14:58:38.800157] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:56.219 [2024-04-26 14:58:38.800162] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:56.219 [2024-04-26 14:58:38.800167] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:56.219 [2024-04-26 14:58:38.800173] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:56.219 [2024-04-26 14:58:38.800178] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:56.219 [2024-04-26 14:58:38.800185] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:56.219 [2024-04-26 14:58:38.800192] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800196] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800199] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.219 [2024-04-26 14:58:38.800206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.219 [2024-04-26 14:58:38.800216] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.219 [2024-04-26 14:58:38.800283] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.219 [2024-04-26 14:58:38.800290] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.219 [2024-04-26 14:58:38.800293] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800297] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75a60) on tqpair=0xe0dd10 00:21:56.219 [2024-04-26 14:58:38.800304] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800307] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800311] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0dd10) 00:21:56.219 [2024-04-26 14:58:38.800317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.219 [2024-04-26 14:58:38.800323] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800326] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800330] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe0dd10) 00:21:56.219 [2024-04-26 14:58:38.800336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.219 [2024-04-26 14:58:38.800342] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800345] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800349] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe0dd10) 00:21:56.219 [2024-04-26 14:58:38.800354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.219 [2024-04-26 14:58:38.800360] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800364] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800367] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.219 [2024-04-26 14:58:38.800373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.219 [2024-04-26 14:58:38.800377] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:56.219 [2024-04-26 14:58:38.800387] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:56.219 [2024-04-26 14:58:38.800394] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800397] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0dd10) 00:21:56.219 [2024-04-26 14:58:38.800406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.219 [2024-04-26 14:58:38.800417] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75a60, cid 0, qid 0 00:21:56.219 [2024-04-26 14:58:38.800422] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75bc0, cid 1, qid 0 00:21:56.219 [2024-04-26 14:58:38.800427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75d20, cid 2, qid 0 00:21:56.219 [2024-04-26 14:58:38.800432] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.219 [2024-04-26 14:58:38.800436] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75fe0, cid 4, qid 0 00:21:56.219 [2024-04-26 14:58:38.800554] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.219 [2024-04-26 14:58:38.800560] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.219 [2024-04-26 14:58:38.800563] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800567] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75fe0) on tqpair=0xe0dd10 00:21:56.219 [2024-04-26 14:58:38.800571] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:56.219 [2024-04-26 14:58:38.800577] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:56.219 [2024-04-26 14:58:38.800586] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800590] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0dd10) 00:21:56.219 [2024-04-26 14:58:38.800597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.219 [2024-04-26 14:58:38.800606] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75fe0, cid 4, qid 0 00:21:56.219 [2024-04-26 14:58:38.800683] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.219 [2024-04-26 14:58:38.800690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.219 [2024-04-26 14:58:38.800693] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.219 [2024-04-26 14:58:38.800697] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0dd10): datao=0, datal=4096, cccid=4 00:21:56.220 [2024-04-26 14:58:38.800701] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe75fe0) on tqpair(0xe0dd10): expected_datao=0, payload_size=4096 00:21:56.220 [2024-04-26 14:58:38.800705] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.800728] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.800731] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.844844] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.220 [2024-04-26 14:58:38.844855] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.220 [2024-04-26 14:58:38.844858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.844862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75fe0) on tqpair=0xe0dd10 00:21:56.220 [2024-04-26 14:58:38.844874] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:56.220 [2024-04-26 14:58:38.844893] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.844897] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0dd10) 00:21:56.220 [2024-04-26 14:58:38.844904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.220 [2024-04-26 14:58:38.844911] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.844915] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.844918] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe0dd10) 00:21:56.220 [2024-04-26 14:58:38.844927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.220 [2024-04-26 14:58:38.844943] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xe75fe0, cid 4, qid 0 00:21:56.220 [2024-04-26 14:58:38.844948] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe76140, cid 5, qid 0 00:21:56.220 [2024-04-26 14:58:38.845161] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.220 [2024-04-26 14:58:38.845167] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.220 [2024-04-26 14:58:38.845171] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.845175] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0dd10): datao=0, datal=1024, cccid=4 00:21:56.220 [2024-04-26 14:58:38.845179] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe75fe0) on tqpair(0xe0dd10): expected_datao=0, payload_size=1024 00:21:56.220 [2024-04-26 14:58:38.845183] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.845190] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.845193] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.845199] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.220 [2024-04-26 14:58:38.845205] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.220 [2024-04-26 14:58:38.845208] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.220 [2024-04-26 14:58:38.845212] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe76140) on tqpair=0xe0dd10 00:21:56.484 [2024-04-26 14:58:38.887042] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.484 [2024-04-26 14:58:38.887059] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.484 [2024-04-26 14:58:38.887063] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887067] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75fe0) on tqpair=0xe0dd10 00:21:56.484 [2024-04-26 14:58:38.887080] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887084] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0dd10) 00:21:56.484 [2024-04-26 14:58:38.887092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-04-26 14:58:38.887108] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75fe0, cid 4, qid 0 00:21:56.484 [2024-04-26 14:58:38.887317] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.484 [2024-04-26 14:58:38.887324] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.484 [2024-04-26 14:58:38.887327] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887331] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0dd10): datao=0, datal=3072, cccid=4 00:21:56.484 [2024-04-26 14:58:38.887336] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe75fe0) on tqpair(0xe0dd10): expected_datao=0, payload_size=3072 00:21:56.484 [2024-04-26 14:58:38.887340] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887347] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887350] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887538] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.484 [2024-04-26 14:58:38.887544] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.484 [2024-04-26 14:58:38.887547] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887551] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75fe0) on tqpair=0xe0dd10 00:21:56.484 [2024-04-26 14:58:38.887559] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887563] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0dd10) 00:21:56.484 [2024-04-26 14:58:38.887573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.484 [2024-04-26 14:58:38.887586] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75fe0, cid 4, qid 0 00:21:56.484 [2024-04-26 14:58:38.887842] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.484 [2024-04-26 14:58:38.887849] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.484 [2024-04-26 14:58:38.887852] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887856] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0dd10): datao=0, datal=8, cccid=4 00:21:56.484 [2024-04-26 14:58:38.887860] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe75fe0) on tqpair(0xe0dd10): expected_datao=0, payload_size=8 00:21:56.484 [2024-04-26 14:58:38.887864] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887871] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.887874] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.932845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.484 [2024-04-26 14:58:38.932853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.484 [2024-04-26 14:58:38.932857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.484 [2024-04-26 14:58:38.932860] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75fe0) on tqpair=0xe0dd10 00:21:56.484 ===================================================== 00:21:56.484 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:56.484 ===================================================== 00:21:56.484 Controller Capabilities/Features 00:21:56.484 ================================ 00:21:56.484 Vendor ID: 0000 00:21:56.485 Subsystem Vendor ID: 0000 00:21:56.485 Serial Number: .................... 00:21:56.485 Model Number: ........................................ 
00:21:56.485 Firmware Version: 24.05 00:21:56.485 Recommended Arb Burst: 0 00:21:56.485 IEEE OUI Identifier: 00 00 00 00:21:56.485 Multi-path I/O 00:21:56.485 May have multiple subsystem ports: No 00:21:56.485 May have multiple controllers: No 00:21:56.485 Associated with SR-IOV VF: No 00:21:56.485 Max Data Transfer Size: 131072 00:21:56.485 Max Number of Namespaces: 0 00:21:56.485 Max Number of I/O Queues: 1024 00:21:56.485 NVMe Specification Version (VS): 1.3 00:21:56.485 NVMe Specification Version (Identify): 1.3 00:21:56.485 Maximum Queue Entries: 128 00:21:56.485 Contiguous Queues Required: Yes 00:21:56.485 Arbitration Mechanisms Supported 00:21:56.485 Weighted Round Robin: Not Supported 00:21:56.485 Vendor Specific: Not Supported 00:21:56.485 Reset Timeout: 15000 ms 00:21:56.485 Doorbell Stride: 4 bytes 00:21:56.485 NVM Subsystem Reset: Not Supported 00:21:56.485 Command Sets Supported 00:21:56.485 NVM Command Set: Supported 00:21:56.485 Boot Partition: Not Supported 00:21:56.485 Memory Page Size Minimum: 4096 bytes 00:21:56.485 Memory Page Size Maximum: 4096 bytes 00:21:56.485 Persistent Memory Region: Not Supported 00:21:56.485 Optional Asynchronous Events Supported 00:21:56.485 Namespace Attribute Notices: Not Supported 00:21:56.485 Firmware Activation Notices: Not Supported 00:21:56.485 ANA Change Notices: Not Supported 00:21:56.485 PLE Aggregate Log Change Notices: Not Supported 00:21:56.485 LBA Status Info Alert Notices: Not Supported 00:21:56.485 EGE Aggregate Log Change Notices: Not Supported 00:21:56.485 Normal NVM Subsystem Shutdown event: Not Supported 00:21:56.485 Zone Descriptor Change Notices: Not Supported 00:21:56.485 Discovery Log Change Notices: Supported 00:21:56.485 Controller Attributes 00:21:56.485 128-bit Host Identifier: Not Supported 00:21:56.485 Non-Operational Permissive Mode: Not Supported 00:21:56.485 NVM Sets: Not Supported 00:21:56.485 Read Recovery Levels: Not Supported 00:21:56.485 Endurance Groups: Not Supported 00:21:56.485 Predictable Latency Mode: Not Supported 00:21:56.485 Traffic Based Keep ALive: Not Supported 00:21:56.485 Namespace Granularity: Not Supported 00:21:56.485 SQ Associations: Not Supported 00:21:56.485 UUID List: Not Supported 00:21:56.485 Multi-Domain Subsystem: Not Supported 00:21:56.485 Fixed Capacity Management: Not Supported 00:21:56.485 Variable Capacity Management: Not Supported 00:21:56.485 Delete Endurance Group: Not Supported 00:21:56.485 Delete NVM Set: Not Supported 00:21:56.485 Extended LBA Formats Supported: Not Supported 00:21:56.485 Flexible Data Placement Supported: Not Supported 00:21:56.485 00:21:56.485 Controller Memory Buffer Support 00:21:56.485 ================================ 00:21:56.485 Supported: No 00:21:56.485 00:21:56.485 Persistent Memory Region Support 00:21:56.485 ================================ 00:21:56.485 Supported: No 00:21:56.485 00:21:56.485 Admin Command Set Attributes 00:21:56.485 ============================ 00:21:56.485 Security Send/Receive: Not Supported 00:21:56.485 Format NVM: Not Supported 00:21:56.485 Firmware Activate/Download: Not Supported 00:21:56.485 Namespace Management: Not Supported 00:21:56.485 Device Self-Test: Not Supported 00:21:56.485 Directives: Not Supported 00:21:56.485 NVMe-MI: Not Supported 00:21:56.485 Virtualization Management: Not Supported 00:21:56.485 Doorbell Buffer Config: Not Supported 00:21:56.485 Get LBA Status Capability: Not Supported 00:21:56.485 Command & Feature Lockdown Capability: Not Supported 00:21:56.485 Abort Command Limit: 1 00:21:56.485 Async 
Event Request Limit: 4 00:21:56.485 Number of Firmware Slots: N/A 00:21:56.485 Firmware Slot 1 Read-Only: N/A 00:21:56.485 Firmware Activation Without Reset: N/A 00:21:56.485 Multiple Update Detection Support: N/A 00:21:56.485 Firmware Update Granularity: No Information Provided 00:21:56.485 Per-Namespace SMART Log: No 00:21:56.485 Asymmetric Namespace Access Log Page: Not Supported 00:21:56.485 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:56.485 Command Effects Log Page: Not Supported 00:21:56.485 Get Log Page Extended Data: Supported 00:21:56.485 Telemetry Log Pages: Not Supported 00:21:56.485 Persistent Event Log Pages: Not Supported 00:21:56.485 Supported Log Pages Log Page: May Support 00:21:56.485 Commands Supported & Effects Log Page: Not Supported 00:21:56.485 Feature Identifiers & Effects Log Page:May Support 00:21:56.485 NVMe-MI Commands & Effects Log Page: May Support 00:21:56.485 Data Area 4 for Telemetry Log: Not Supported 00:21:56.485 Error Log Page Entries Supported: 128 00:21:56.485 Keep Alive: Not Supported 00:21:56.485 00:21:56.485 NVM Command Set Attributes 00:21:56.485 ========================== 00:21:56.485 Submission Queue Entry Size 00:21:56.485 Max: 1 00:21:56.485 Min: 1 00:21:56.485 Completion Queue Entry Size 00:21:56.485 Max: 1 00:21:56.485 Min: 1 00:21:56.485 Number of Namespaces: 0 00:21:56.485 Compare Command: Not Supported 00:21:56.485 Write Uncorrectable Command: Not Supported 00:21:56.485 Dataset Management Command: Not Supported 00:21:56.485 Write Zeroes Command: Not Supported 00:21:56.485 Set Features Save Field: Not Supported 00:21:56.485 Reservations: Not Supported 00:21:56.485 Timestamp: Not Supported 00:21:56.485 Copy: Not Supported 00:21:56.485 Volatile Write Cache: Not Present 00:21:56.485 Atomic Write Unit (Normal): 1 00:21:56.485 Atomic Write Unit (PFail): 1 00:21:56.485 Atomic Compare & Write Unit: 1 00:21:56.485 Fused Compare & Write: Supported 00:21:56.485 Scatter-Gather List 00:21:56.485 SGL Command Set: Supported 00:21:56.485 SGL Keyed: Supported 00:21:56.485 SGL Bit Bucket Descriptor: Not Supported 00:21:56.485 SGL Metadata Pointer: Not Supported 00:21:56.485 Oversized SGL: Not Supported 00:21:56.485 SGL Metadata Address: Not Supported 00:21:56.485 SGL Offset: Supported 00:21:56.485 Transport SGL Data Block: Not Supported 00:21:56.485 Replay Protected Memory Block: Not Supported 00:21:56.485 00:21:56.485 Firmware Slot Information 00:21:56.485 ========================= 00:21:56.485 Active slot: 0 00:21:56.485 00:21:56.485 00:21:56.485 Error Log 00:21:56.485 ========= 00:21:56.485 00:21:56.485 Active Namespaces 00:21:56.485 ================= 00:21:56.485 Discovery Log Page 00:21:56.485 ================== 00:21:56.485 Generation Counter: 2 00:21:56.485 Number of Records: 2 00:21:56.485 Record Format: 0 00:21:56.485 00:21:56.485 Discovery Log Entry 0 00:21:56.485 ---------------------- 00:21:56.485 Transport Type: 3 (TCP) 00:21:56.485 Address Family: 1 (IPv4) 00:21:56.485 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:56.485 Entry Flags: 00:21:56.485 Duplicate Returned Information: 1 00:21:56.485 Explicit Persistent Connection Support for Discovery: 1 00:21:56.485 Transport Requirements: 00:21:56.485 Secure Channel: Not Required 00:21:56.485 Port ID: 0 (0x0000) 00:21:56.485 Controller ID: 65535 (0xffff) 00:21:56.485 Admin Max SQ Size: 128 00:21:56.485 Transport Service Identifier: 4420 00:21:56.486 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:56.486 Transport Address: 10.0.0.2 00:21:56.486 
Discovery Log Entry 1 00:21:56.486 ---------------------- 00:21:56.486 Transport Type: 3 (TCP) 00:21:56.486 Address Family: 1 (IPv4) 00:21:56.486 Subsystem Type: 2 (NVM Subsystem) 00:21:56.486 Entry Flags: 00:21:56.486 Duplicate Returned Information: 0 00:21:56.486 Explicit Persistent Connection Support for Discovery: 0 00:21:56.486 Transport Requirements: 00:21:56.486 Secure Channel: Not Required 00:21:56.486 Port ID: 0 (0x0000) 00:21:56.486 Controller ID: 65535 (0xffff) 00:21:56.486 Admin Max SQ Size: 128 00:21:56.486 Transport Service Identifier: 4420 00:21:56.486 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:56.486 Transport Address: 10.0.0.2 [2024-04-26 14:58:38.932949] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:56.486 [2024-04-26 14:58:38.932961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.486 [2024-04-26 14:58:38.932968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.486 [2024-04-26 14:58:38.932974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.486 [2024-04-26 14:58:38.932980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.486 [2024-04-26 14:58:38.932988] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.932992] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.932995] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.486 [2024-04-26 14:58:38.933002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.486 [2024-04-26 14:58:38.933015] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.486 [2024-04-26 14:58:38.933236] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.486 [2024-04-26 14:58:38.933242] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.486 [2024-04-26 14:58:38.933245] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.933249] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.486 [2024-04-26 14:58:38.933256] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.933260] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.933263] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.486 [2024-04-26 14:58:38.933270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.486 [2024-04-26 14:58:38.933282] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.486 [2024-04-26 14:58:38.933484] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.486 [2024-04-26 14:58:38.933492] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.486 [2024-04-26 14:58:38.933495] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.933499] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.486 [2024-04-26 14:58:38.933504] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:56.486 [2024-04-26 14:58:38.933508] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:56.486 [2024-04-26 14:58:38.933517] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.933521] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.933524] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.486 [2024-04-26 14:58:38.933531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.486 [2024-04-26 14:58:38.933541] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.486 [2024-04-26 14:58:38.933744] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.486 [2024-04-26 14:58:38.933750] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.486 [2024-04-26 14:58:38.933754] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.933757] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.486 [2024-04-26 14:58:38.933767] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.933771] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.933774] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.486 [2024-04-26 14:58:38.933781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.486 [2024-04-26 14:58:38.933790] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.486 [2024-04-26 14:58:38.934011] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.486 [2024-04-26 14:58:38.934017] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.486 [2024-04-26 14:58:38.934021] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934024] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.486 [2024-04-26 14:58:38.934034] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934038] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934041] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.486 [2024-04-26 14:58:38.934048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.486 [2024-04-26 14:58:38.934058] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.486 [2024-04-26 14:58:38.934232] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.486 [2024-04-26 
14:58:38.934238] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.486 [2024-04-26 14:58:38.934242] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934245] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.486 [2024-04-26 14:58:38.934255] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934258] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934262] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.486 [2024-04-26 14:58:38.934269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.486 [2024-04-26 14:58:38.934280] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.486 [2024-04-26 14:58:38.934482] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.486 [2024-04-26 14:58:38.934488] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.486 [2024-04-26 14:58:38.934491] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934495] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.486 [2024-04-26 14:58:38.934504] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934508] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934512] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.486 [2024-04-26 14:58:38.934518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.486 [2024-04-26 14:58:38.934528] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.486 [2024-04-26 14:58:38.934699] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.486 [2024-04-26 14:58:38.934705] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.486 [2024-04-26 14:58:38.934709] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.486 [2024-04-26 14:58:38.934712] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.486 [2024-04-26 14:58:38.934722] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.934726] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.934729] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.934736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.487 [2024-04-26 14:58:38.934745] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.487 [2024-04-26 14:58:38.934917] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.487 [2024-04-26 14:58:38.934923] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.487 [2024-04-26 14:58:38.934926] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.487 
[2024-04-26 14:58:38.934930] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.487 [2024-04-26 14:58:38.934939] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.934943] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.934947] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.934953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.487 [2024-04-26 14:58:38.934963] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.487 [2024-04-26 14:58:38.935171] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.487 [2024-04-26 14:58:38.935177] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.487 [2024-04-26 14:58:38.935180] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935184] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.487 [2024-04-26 14:58:38.935193] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935197] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935200] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.935207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.487 [2024-04-26 14:58:38.935216] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.487 [2024-04-26 14:58:38.935404] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.487 [2024-04-26 14:58:38.935410] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.487 [2024-04-26 14:58:38.935414] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935417] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.487 [2024-04-26 14:58:38.935427] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935431] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935434] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.935441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.487 [2024-04-26 14:58:38.935450] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.487 [2024-04-26 14:58:38.935673] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.487 [2024-04-26 14:58:38.935679] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.487 [2024-04-26 14:58:38.935682] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935686] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.487 [2024-04-26 14:58:38.935695] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935699] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935702] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.935709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.487 [2024-04-26 14:58:38.935718] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.487 [2024-04-26 14:58:38.935914] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.487 [2024-04-26 14:58:38.935920] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.487 [2024-04-26 14:58:38.935924] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935927] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.487 [2024-04-26 14:58:38.935937] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935940] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.935944] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.935950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.487 [2024-04-26 14:58:38.935960] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.487 [2024-04-26 14:58:38.936155] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.487 [2024-04-26 14:58:38.936161] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.487 [2024-04-26 14:58:38.936164] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.936168] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.487 [2024-04-26 14:58:38.936177] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.936181] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.936185] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.936191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.487 [2024-04-26 14:58:38.936201] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.487 [2024-04-26 14:58:38.936384] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.487 [2024-04-26 14:58:38.936392] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.487 [2024-04-26 14:58:38.936395] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.936399] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.487 [2024-04-26 14:58:38.936409] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.936412] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.936416] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.936423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.487 [2024-04-26 14:58:38.936432] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.487 [2024-04-26 14:58:38.936615] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.487 [2024-04-26 14:58:38.936621] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.487 [2024-04-26 14:58:38.936624] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.936628] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.487 [2024-04-26 14:58:38.936637] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.936641] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.936644] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.936651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.487 [2024-04-26 14:58:38.936660] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.487 [2024-04-26 14:58:38.940845] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.487 [2024-04-26 14:58:38.940853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.487 [2024-04-26 14:58:38.940856] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.940860] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.487 [2024-04-26 14:58:38.940870] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.940874] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.487 [2024-04-26 14:58:38.940877] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0dd10) 00:21:56.487 [2024-04-26 14:58:38.940884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.488 [2024-04-26 14:58:38.940894] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe75e80, cid 3, qid 0 00:21:56.488 [2024-04-26 14:58:38.941075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.488 [2024-04-26 14:58:38.941081] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.488 [2024-04-26 14:58:38.941085] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:38.941088] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe75e80) on tqpair=0xe0dd10 00:21:56.488 [2024-04-26 14:58:38.941095] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:21:56.488 00:21:56.488 14:58:38 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:56.488 [2024-04-26 14:58:38.982620] 
Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:56.488 [2024-04-26 14:58:38.982670] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146316 ] 00:21:56.488 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.488 [2024-04-26 14:58:39.015357] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:56.488 [2024-04-26 14:58:39.015398] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:56.488 [2024-04-26 14:58:39.015404] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:56.488 [2024-04-26 14:58:39.015415] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:56.488 [2024-04-26 14:58:39.015423] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:56.488 [2024-04-26 14:58:39.018869] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:56.488 [2024-04-26 14:58:39.018896] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x233cd10 0 00:21:56.488 [2024-04-26 14:58:39.019121] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:56.488 [2024-04-26 14:58:39.019128] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:56.488 [2024-04-26 14:58:39.019132] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:56.488 [2024-04-26 14:58:39.019135] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:56.488 [2024-04-26 14:58:39.019164] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.019169] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.019173] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.488 [2024-04-26 14:58:39.019184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:56.488 [2024-04-26 14:58:39.019196] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.488 [2024-04-26 14:58:39.026846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.488 [2024-04-26 14:58:39.026855] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.488 [2024-04-26 14:58:39.026858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.026863] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4a60) on tqpair=0x233cd10 00:21:56.488 [2024-04-26 14:58:39.026872] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:56.488 [2024-04-26 14:58:39.026878] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:56.488 [2024-04-26 14:58:39.026883] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:56.488 [2024-04-26 14:58:39.026895] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.026899] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.026902] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.488 [2024-04-26 14:58:39.026910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.488 [2024-04-26 14:58:39.026922] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.488 [2024-04-26 14:58:39.027135] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.488 [2024-04-26 14:58:39.027141] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.488 [2024-04-26 14:58:39.027144] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027148] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4a60) on tqpair=0x233cd10 00:21:56.488 [2024-04-26 14:58:39.027154] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:56.488 [2024-04-26 14:58:39.027164] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:56.488 [2024-04-26 14:58:39.027171] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027175] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027178] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.488 [2024-04-26 14:58:39.027185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.488 [2024-04-26 14:58:39.027195] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.488 [2024-04-26 14:58:39.027252] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.488 [2024-04-26 14:58:39.027258] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.488 [2024-04-26 14:58:39.027262] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027265] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4a60) on tqpair=0x233cd10 00:21:56.488 [2024-04-26 14:58:39.027271] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:56.488 [2024-04-26 14:58:39.027279] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:56.488 [2024-04-26 14:58:39.027285] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027289] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027292] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.488 [2024-04-26 14:58:39.027299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.488 [2024-04-26 14:58:39.027309] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.488 [2024-04-26 14:58:39.027372] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.488 [2024-04-26 14:58:39.027378] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.488 [2024-04-26 14:58:39.027381] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027385] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4a60) on tqpair=0x233cd10 00:21:56.488 [2024-04-26 14:58:39.027391] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:56.488 [2024-04-26 14:58:39.027400] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027404] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027407] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.488 [2024-04-26 14:58:39.027414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.488 [2024-04-26 14:58:39.027423] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.488 [2024-04-26 14:58:39.027480] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.488 [2024-04-26 14:58:39.027487] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.488 [2024-04-26 14:58:39.027490] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027494] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4a60) on tqpair=0x233cd10 00:21:56.488 [2024-04-26 14:58:39.027499] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:56.488 [2024-04-26 14:58:39.027504] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:56.488 [2024-04-26 14:58:39.027511] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:56.488 [2024-04-26 14:58:39.027618] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:56.488 [2024-04-26 14:58:39.027622] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:56.488 [2024-04-26 14:58:39.027629] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027633] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.488 [2024-04-26 14:58:39.027636] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.488 [2024-04-26 14:58:39.027643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.488 [2024-04-26 14:58:39.027653] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.488 [2024-04-26 14:58:39.027710] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.488 [2024-04-26 14:58:39.027717] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.489 [2024-04-26 14:58:39.027720] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.027724] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4a60) on tqpair=0x233cd10 00:21:56.489 [2024-04-26 14:58:39.027729] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:56.489 [2024-04-26 14:58:39.027738] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.027742] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.027745] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.489 [2024-04-26 14:58:39.027752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.489 [2024-04-26 14:58:39.027762] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.489 [2024-04-26 14:58:39.027819] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.489 [2024-04-26 14:58:39.027825] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.489 [2024-04-26 14:58:39.027828] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.027832] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4a60) on tqpair=0x233cd10 00:21:56.489 [2024-04-26 14:58:39.027841] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:56.489 [2024-04-26 14:58:39.027846] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:56.489 [2024-04-26 14:58:39.027853] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:56.489 [2024-04-26 14:58:39.027861] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:56.489 [2024-04-26 14:58:39.027870] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.027874] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.489 [2024-04-26 14:58:39.027881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.489 [2024-04-26 14:58:39.027892] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.489 [2024-04-26 14:58:39.027974] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.489 [2024-04-26 14:58:39.027981] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.489 [2024-04-26 14:58:39.027984] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.027990] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233cd10): datao=0, datal=4096, cccid=0 00:21:56.489 [2024-04-26 14:58:39.027994] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a4a60) on tqpair(0x233cd10): expected_datao=0, payload_size=4096 00:21:56.489 [2024-04-26 14:58:39.027999] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.028017] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:21:56.489 [2024-04-26 14:58:39.028021] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069022] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.489 [2024-04-26 14:58:39.069031] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.489 [2024-04-26 14:58:39.069035] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069038] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4a60) on tqpair=0x233cd10 00:21:56.489 [2024-04-26 14:58:39.069047] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:56.489 [2024-04-26 14:58:39.069052] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:56.489 [2024-04-26 14:58:39.069056] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:56.489 [2024-04-26 14:58:39.069060] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:56.489 [2024-04-26 14:58:39.069064] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:56.489 [2024-04-26 14:58:39.069069] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:56.489 [2024-04-26 14:58:39.069078] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:56.489 [2024-04-26 14:58:39.069085] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069089] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069092] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.489 [2024-04-26 14:58:39.069099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.489 [2024-04-26 14:58:39.069110] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.489 [2024-04-26 14:58:39.069282] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.489 [2024-04-26 14:58:39.069288] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.489 [2024-04-26 14:58:39.069292] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069295] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4a60) on tqpair=0x233cd10 00:21:56.489 [2024-04-26 14:58:39.069303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069307] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069310] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x233cd10) 00:21:56.489 [2024-04-26 14:58:39.069316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.489 [2024-04-26 14:58:39.069322] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069326] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
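The surrounding *DEBUG* entries trace the admin-queue bring-up that spdk_nvme_identify drives against nqn.2016-06.io.spdk:cnode1: FABRIC CONNECT, VS/CAP register reads, CC.EN = 1, wait for CSTS.RDY, IDENTIFY, AER configuration and queue-count negotiation. As a minimal sketch, the same probe can be reproduced by hand against the target used in this run (10.0.0.2:4420); the nvme-cli step is an assumption (it is not exercised by this test), while the identify invocation is the one shown above from the build tree:

  # Assumes nvme-cli is installed on the initiator host; fetches the discovery
  # log page dumped earlier in this log (Discovery Log Entry 0/1)
  nvme discover -t tcp -a 10.0.0.2 -s 4420

  # Re-runs the identify pass performed by host/identify.sh above
  ./build/bin/spdk_nvme_identify \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all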
00:21:56.489 [2024-04-26 14:58:39.069329] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x233cd10) 00:21:56.489 [2024-04-26 14:58:39.069335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.489 [2024-04-26 14:58:39.069341] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069345] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069351] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x233cd10) 00:21:56.489 [2024-04-26 14:58:39.069356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.489 [2024-04-26 14:58:39.069363] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069366] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069369] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233cd10) 00:21:56.489 [2024-04-26 14:58:39.069375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.489 [2024-04-26 14:58:39.069380] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:56.489 [2024-04-26 14:58:39.069391] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:56.489 [2024-04-26 14:58:39.069397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069401] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233cd10) 00:21:56.489 [2024-04-26 14:58:39.069408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.489 [2024-04-26 14:58:39.069419] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4a60, cid 0, qid 0 00:21:56.489 [2024-04-26 14:58:39.069424] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4bc0, cid 1, qid 0 00:21:56.489 [2024-04-26 14:58:39.069429] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4d20, cid 2, qid 0 00:21:56.489 [2024-04-26 14:58:39.069433] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4e80, cid 3, qid 0 00:21:56.489 [2024-04-26 14:58:39.069438] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4fe0, cid 4, qid 0 00:21:56.489 [2024-04-26 14:58:39.069624] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.489 [2024-04-26 14:58:39.069631] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.489 [2024-04-26 14:58:39.069634] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.489 [2024-04-26 14:58:39.069638] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4fe0) on tqpair=0x233cd10 00:21:56.489 [2024-04-26 14:58:39.069643] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:56.489 [2024-04-26 14:58:39.069648] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:56.489 [2024-04-26 14:58:39.069657] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:56.489 [2024-04-26 14:58:39.069663] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:56.489 [2024-04-26 14:58:39.069670] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.069674] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.069677] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233cd10) 00:21:56.490 [2024-04-26 14:58:39.069683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.490 [2024-04-26 14:58:39.069693] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4fe0, cid 4, qid 0 00:21:56.490 [2024-04-26 14:58:39.069926] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.490 [2024-04-26 14:58:39.069932] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.490 [2024-04-26 14:58:39.069936] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.069941] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4fe0) on tqpair=0x233cd10 00:21:56.490 [2024-04-26 14:58:39.069992] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:56.490 [2024-04-26 14:58:39.070001] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:56.490 [2024-04-26 14:58:39.070008] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.070012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233cd10) 00:21:56.490 [2024-04-26 14:58:39.070018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.490 [2024-04-26 14:58:39.070028] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4fe0, cid 4, qid 0 00:21:56.490 [2024-04-26 14:58:39.070233] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.490 [2024-04-26 14:58:39.070239] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.490 [2024-04-26 14:58:39.070243] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.070246] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233cd10): datao=0, datal=4096, cccid=4 00:21:56.490 [2024-04-26 14:58:39.070251] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a4fe0) on tqpair(0x233cd10): expected_datao=0, payload_size=4096 00:21:56.490 [2024-04-26 14:58:39.070255] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.070262] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.070265] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:21:56.490 [2024-04-26 14:58:39.070429] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.490 [2024-04-26 14:58:39.070435] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.490 [2024-04-26 14:58:39.070439] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.070443] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4fe0) on tqpair=0x233cd10 00:21:56.490 [2024-04-26 14:58:39.070452] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:56.490 [2024-04-26 14:58:39.070464] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:56.490 [2024-04-26 14:58:39.070473] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:56.490 [2024-04-26 14:58:39.070480] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.070484] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233cd10) 00:21:56.490 [2024-04-26 14:58:39.070490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.490 [2024-04-26 14:58:39.070500] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4fe0, cid 4, qid 0 00:21:56.490 [2024-04-26 14:58:39.070704] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.490 [2024-04-26 14:58:39.070711] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.490 [2024-04-26 14:58:39.070714] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.070718] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233cd10): datao=0, datal=4096, cccid=4 00:21:56.490 [2024-04-26 14:58:39.070722] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a4fe0) on tqpair(0x233cd10): expected_datao=0, payload_size=4096 00:21:56.490 [2024-04-26 14:58:39.070726] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.070759] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.070763] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.114843] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.490 [2024-04-26 14:58:39.114854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.490 [2024-04-26 14:58:39.114858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.114862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4fe0) on tqpair=0x233cd10 00:21:56.490 [2024-04-26 14:58:39.114876] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:56.490 [2024-04-26 14:58:39.114885] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:56.490 [2024-04-26 14:58:39.114893] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.490 [2024-04-26 14:58:39.114897] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233cd10) 00:21:56.490 [2024-04-26 14:58:39.114904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.490 [2024-04-26 14:58:39.114915] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4fe0, cid 4, qid 0 00:21:56.490 [2024-04-26 14:58:39.115078] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.491 [2024-04-26 14:58:39.115084] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.491 [2024-04-26 14:58:39.115088] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.491 [2024-04-26 14:58:39.115091] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233cd10): datao=0, datal=4096, cccid=4 00:21:56.491 [2024-04-26 14:58:39.115095] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a4fe0) on tqpair(0x233cd10): expected_datao=0, payload_size=4096 00:21:56.491 [2024-04-26 14:58:39.115100] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.491 [2024-04-26 14:58:39.115117] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.491 [2024-04-26 14:58:39.115121] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.754 [2024-04-26 14:58:39.156067] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.754 [2024-04-26 14:58:39.156077] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.754 [2024-04-26 14:58:39.156080] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.754 [2024-04-26 14:58:39.156084] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4fe0) on tqpair=0x233cd10 00:21:56.754 [2024-04-26 14:58:39.156093] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:56.754 [2024-04-26 14:58:39.156101] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:56.754 [2024-04-26 14:58:39.156112] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:56.754 [2024-04-26 14:58:39.156118] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:56.754 [2024-04-26 14:58:39.156123] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:56.754 [2024-04-26 14:58:39.156128] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:56.754 [2024-04-26 14:58:39.156132] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:56.754 [2024-04-26 14:58:39.156137] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:56.754 [2024-04-26 14:58:39.156152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.754 [2024-04-26 14:58:39.156156] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233cd10) 00:21:56.754 [2024-04-26 14:58:39.156165] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.754 [2024-04-26 14:58:39.156172] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.156175] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.156179] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233cd10) 00:21:56.755 [2024-04-26 14:58:39.156185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.755 [2024-04-26 14:58:39.156199] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4fe0, cid 4, qid 0 00:21:56.755 [2024-04-26 14:58:39.156204] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a5140, cid 5, qid 0 00:21:56.755 [2024-04-26 14:58:39.156325] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.755 [2024-04-26 14:58:39.156331] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.755 [2024-04-26 14:58:39.156334] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.156338] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4fe0) on tqpair=0x233cd10 00:21:56.755 [2024-04-26 14:58:39.156345] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.755 [2024-04-26 14:58:39.156351] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.755 [2024-04-26 14:58:39.156355] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.156358] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a5140) on tqpair=0x233cd10 00:21:56.755 [2024-04-26 14:58:39.156368] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.156372] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233cd10) 00:21:56.755 [2024-04-26 14:58:39.156378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.755 [2024-04-26 14:58:39.156387] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a5140, cid 5, qid 0 00:21:56.755 [2024-04-26 14:58:39.156575] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.755 [2024-04-26 14:58:39.156581] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.755 [2024-04-26 14:58:39.156584] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.156588] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a5140) on tqpair=0x233cd10 00:21:56.755 [2024-04-26 14:58:39.156597] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.156601] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233cd10) 00:21:56.755 [2024-04-26 14:58:39.156607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.755 [2024-04-26 14:58:39.156616] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a5140, cid 5, qid 0 00:21:56.755 [2024-04-26 14:58:39.156862] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:21:56.755 [2024-04-26 14:58:39.156869] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.755 [2024-04-26 14:58:39.156872] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.156876] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a5140) on tqpair=0x233cd10 00:21:56.755 [2024-04-26 14:58:39.156886] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.156889] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233cd10) 00:21:56.755 [2024-04-26 14:58:39.156896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.755 [2024-04-26 14:58:39.156905] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a5140, cid 5, qid 0 00:21:56.755 [2024-04-26 14:58:39.157131] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.755 [2024-04-26 14:58:39.157137] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.755 [2024-04-26 14:58:39.157141] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157144] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a5140) on tqpair=0x233cd10 00:21:56.755 [2024-04-26 14:58:39.157156] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157160] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x233cd10) 00:21:56.755 [2024-04-26 14:58:39.157166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.755 [2024-04-26 14:58:39.157174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157177] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x233cd10) 00:21:56.755 [2024-04-26 14:58:39.157183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.755 [2024-04-26 14:58:39.157190] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157194] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x233cd10) 00:21:56.755 [2024-04-26 14:58:39.157200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.755 [2024-04-26 14:58:39.157207] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157211] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x233cd10) 00:21:56.755 [2024-04-26 14:58:39.157217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.755 [2024-04-26 14:58:39.157227] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a5140, cid 5, qid 0 00:21:56.755 [2024-04-26 14:58:39.157232] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4fe0, cid 4, qid 0 00:21:56.755 [2024-04-26 
14:58:39.157237] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a52a0, cid 6, qid 0 00:21:56.755 [2024-04-26 14:58:39.157242] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a5400, cid 7, qid 0 00:21:56.755 [2024-04-26 14:58:39.157460] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.755 [2024-04-26 14:58:39.157466] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.755 [2024-04-26 14:58:39.157469] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157473] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233cd10): datao=0, datal=8192, cccid=5 00:21:56.755 [2024-04-26 14:58:39.157477] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a5140) on tqpair(0x233cd10): expected_datao=0, payload_size=8192 00:21:56.755 [2024-04-26 14:58:39.157481] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157568] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157572] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157578] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.755 [2024-04-26 14:58:39.157583] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.755 [2024-04-26 14:58:39.157586] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157590] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233cd10): datao=0, datal=512, cccid=4 00:21:56.755 [2024-04-26 14:58:39.157594] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a4fe0) on tqpair(0x233cd10): expected_datao=0, payload_size=512 00:21:56.755 [2024-04-26 14:58:39.157600] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157607] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157610] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157616] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.755 [2024-04-26 14:58:39.157621] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.755 [2024-04-26 14:58:39.157625] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157628] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x233cd10): datao=0, datal=512, cccid=6 00:21:56.755 [2024-04-26 14:58:39.157632] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a52a0) on tqpair(0x233cd10): expected_datao=0, payload_size=512 00:21:56.755 [2024-04-26 14:58:39.157636] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157643] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157646] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157652] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:56.755 [2024-04-26 14:58:39.157657] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:56.755 [2024-04-26 14:58:39.157660] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157664] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x233cd10): datao=0, datal=4096, cccid=7 00:21:56.755 [2024-04-26 14:58:39.157668] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a5400) on tqpair(0x233cd10): expected_datao=0, payload_size=4096 00:21:56.755 [2024-04-26 14:58:39.157672] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157679] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157682] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157699] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.755 [2024-04-26 14:58:39.157706] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.755 [2024-04-26 14:58:39.157709] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.755 [2024-04-26 14:58:39.157713] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a5140) on tqpair=0x233cd10 00:21:56.755 [2024-04-26 14:58:39.157726] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.755 [2024-04-26 14:58:39.157732] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.756 [2024-04-26 14:58:39.157735] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.756 [2024-04-26 14:58:39.157739] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4fe0) on tqpair=0x233cd10 00:21:56.756 [2024-04-26 14:58:39.157748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.756 [2024-04-26 14:58:39.157753] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.756 [2024-04-26 14:58:39.157757] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.756 [2024-04-26 14:58:39.157760] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a52a0) on tqpair=0x233cd10 00:21:56.756 [2024-04-26 14:58:39.157768] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.756 [2024-04-26 14:58:39.157774] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.756 [2024-04-26 14:58:39.157777] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.756 [2024-04-26 14:58:39.157781] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a5400) on tqpair=0x233cd10 00:21:56.756 ===================================================== 00:21:56.756 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:56.756 ===================================================== 00:21:56.756 Controller Capabilities/Features 00:21:56.756 ================================ 00:21:56.756 Vendor ID: 8086 00:21:56.756 Subsystem Vendor ID: 8086 00:21:56.756 Serial Number: SPDK00000000000001 00:21:56.756 Model Number: SPDK bdev Controller 00:21:56.756 Firmware Version: 24.05 00:21:56.756 Recommended Arb Burst: 6 00:21:56.756 IEEE OUI Identifier: e4 d2 5c 00:21:56.756 Multi-path I/O 00:21:56.756 May have multiple subsystem ports: Yes 00:21:56.756 May have multiple controllers: Yes 00:21:56.756 Associated with SR-IOV VF: No 00:21:56.756 Max Data Transfer Size: 131072 00:21:56.756 Max Number of Namespaces: 32 00:21:56.756 Max Number of I/O Queues: 127 00:21:56.756 NVMe Specification Version (VS): 1.3 00:21:56.756 NVMe Specification Version (Identify): 1.3 00:21:56.756 Maximum Queue Entries: 128 00:21:56.756 Contiguous Queues Required: Yes 00:21:56.756 Arbitration Mechanisms Supported 
00:21:56.756 Weighted Round Robin: Not Supported 00:21:56.756 Vendor Specific: Not Supported 00:21:56.756 Reset Timeout: 15000 ms 00:21:56.756 Doorbell Stride: 4 bytes 00:21:56.756 NVM Subsystem Reset: Not Supported 00:21:56.756 Command Sets Supported 00:21:56.756 NVM Command Set: Supported 00:21:56.756 Boot Partition: Not Supported 00:21:56.756 Memory Page Size Minimum: 4096 bytes 00:21:56.756 Memory Page Size Maximum: 4096 bytes 00:21:56.756 Persistent Memory Region: Not Supported 00:21:56.756 Optional Asynchronous Events Supported 00:21:56.756 Namespace Attribute Notices: Supported 00:21:56.756 Firmware Activation Notices: Not Supported 00:21:56.756 ANA Change Notices: Not Supported 00:21:56.756 PLE Aggregate Log Change Notices: Not Supported 00:21:56.756 LBA Status Info Alert Notices: Not Supported 00:21:56.756 EGE Aggregate Log Change Notices: Not Supported 00:21:56.756 Normal NVM Subsystem Shutdown event: Not Supported 00:21:56.756 Zone Descriptor Change Notices: Not Supported 00:21:56.756 Discovery Log Change Notices: Not Supported 00:21:56.756 Controller Attributes 00:21:56.756 128-bit Host Identifier: Supported 00:21:56.756 Non-Operational Permissive Mode: Not Supported 00:21:56.756 NVM Sets: Not Supported 00:21:56.756 Read Recovery Levels: Not Supported 00:21:56.756 Endurance Groups: Not Supported 00:21:56.756 Predictable Latency Mode: Not Supported 00:21:56.756 Traffic Based Keep ALive: Not Supported 00:21:56.756 Namespace Granularity: Not Supported 00:21:56.756 SQ Associations: Not Supported 00:21:56.756 UUID List: Not Supported 00:21:56.756 Multi-Domain Subsystem: Not Supported 00:21:56.756 Fixed Capacity Management: Not Supported 00:21:56.756 Variable Capacity Management: Not Supported 00:21:56.756 Delete Endurance Group: Not Supported 00:21:56.756 Delete NVM Set: Not Supported 00:21:56.756 Extended LBA Formats Supported: Not Supported 00:21:56.756 Flexible Data Placement Supported: Not Supported 00:21:56.756 00:21:56.756 Controller Memory Buffer Support 00:21:56.756 ================================ 00:21:56.756 Supported: No 00:21:56.756 00:21:56.756 Persistent Memory Region Support 00:21:56.756 ================================ 00:21:56.756 Supported: No 00:21:56.756 00:21:56.756 Admin Command Set Attributes 00:21:56.756 ============================ 00:21:56.756 Security Send/Receive: Not Supported 00:21:56.756 Format NVM: Not Supported 00:21:56.756 Firmware Activate/Download: Not Supported 00:21:56.756 Namespace Management: Not Supported 00:21:56.756 Device Self-Test: Not Supported 00:21:56.756 Directives: Not Supported 00:21:56.756 NVMe-MI: Not Supported 00:21:56.756 Virtualization Management: Not Supported 00:21:56.756 Doorbell Buffer Config: Not Supported 00:21:56.756 Get LBA Status Capability: Not Supported 00:21:56.756 Command & Feature Lockdown Capability: Not Supported 00:21:56.756 Abort Command Limit: 4 00:21:56.756 Async Event Request Limit: 4 00:21:56.756 Number of Firmware Slots: N/A 00:21:56.756 Firmware Slot 1 Read-Only: N/A 00:21:56.756 Firmware Activation Without Reset: N/A 00:21:56.756 Multiple Update Detection Support: N/A 00:21:56.756 Firmware Update Granularity: No Information Provided 00:21:56.756 Per-Namespace SMART Log: No 00:21:56.756 Asymmetric Namespace Access Log Page: Not Supported 00:21:56.756 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:56.756 Command Effects Log Page: Supported 00:21:56.756 Get Log Page Extended Data: Supported 00:21:56.756 Telemetry Log Pages: Not Supported 00:21:56.756 Persistent Event Log Pages: Not Supported 00:21:56.756 
Supported Log Pages Log Page: May Support 00:21:56.756 Commands Supported & Effects Log Page: Not Supported 00:21:56.756 Feature Identifiers & Effects Log Page:May Support 00:21:56.756 NVMe-MI Commands & Effects Log Page: May Support 00:21:56.756 Data Area 4 for Telemetry Log: Not Supported 00:21:56.756 Error Log Page Entries Supported: 128 00:21:56.756 Keep Alive: Supported 00:21:56.756 Keep Alive Granularity: 10000 ms 00:21:56.756 00:21:56.756 NVM Command Set Attributes 00:21:56.756 ========================== 00:21:56.756 Submission Queue Entry Size 00:21:56.756 Max: 64 00:21:56.756 Min: 64 00:21:56.756 Completion Queue Entry Size 00:21:56.756 Max: 16 00:21:56.756 Min: 16 00:21:56.756 Number of Namespaces: 32 00:21:56.756 Compare Command: Supported 00:21:56.756 Write Uncorrectable Command: Not Supported 00:21:56.756 Dataset Management Command: Supported 00:21:56.756 Write Zeroes Command: Supported 00:21:56.756 Set Features Save Field: Not Supported 00:21:56.756 Reservations: Supported 00:21:56.756 Timestamp: Not Supported 00:21:56.756 Copy: Supported 00:21:56.756 Volatile Write Cache: Present 00:21:56.756 Atomic Write Unit (Normal): 1 00:21:56.756 Atomic Write Unit (PFail): 1 00:21:56.756 Atomic Compare & Write Unit: 1 00:21:56.756 Fused Compare & Write: Supported 00:21:56.756 Scatter-Gather List 00:21:56.756 SGL Command Set: Supported 00:21:56.756 SGL Keyed: Supported 00:21:56.756 SGL Bit Bucket Descriptor: Not Supported 00:21:56.756 SGL Metadata Pointer: Not Supported 00:21:56.756 Oversized SGL: Not Supported 00:21:56.756 SGL Metadata Address: Not Supported 00:21:56.756 SGL Offset: Supported 00:21:56.756 Transport SGL Data Block: Not Supported 00:21:56.756 Replay Protected Memory Block: Not Supported 00:21:56.756 00:21:56.756 Firmware Slot Information 00:21:56.756 ========================= 00:21:56.756 Active slot: 1 00:21:56.756 Slot 1 Firmware Revision: 24.05 00:21:56.756 00:21:56.756 00:21:56.756 Commands Supported and Effects 00:21:56.756 ============================== 00:21:56.756 Admin Commands 00:21:56.756 -------------- 00:21:56.756 Get Log Page (02h): Supported 00:21:56.756 Identify (06h): Supported 00:21:56.756 Abort (08h): Supported 00:21:56.757 Set Features (09h): Supported 00:21:56.757 Get Features (0Ah): Supported 00:21:56.757 Asynchronous Event Request (0Ch): Supported 00:21:56.757 Keep Alive (18h): Supported 00:21:56.757 I/O Commands 00:21:56.757 ------------ 00:21:56.757 Flush (00h): Supported LBA-Change 00:21:56.757 Write (01h): Supported LBA-Change 00:21:56.757 Read (02h): Supported 00:21:56.757 Compare (05h): Supported 00:21:56.757 Write Zeroes (08h): Supported LBA-Change 00:21:56.757 Dataset Management (09h): Supported LBA-Change 00:21:56.757 Copy (19h): Supported LBA-Change 00:21:56.757 Unknown (79h): Supported LBA-Change 00:21:56.757 Unknown (7Ah): Supported 00:21:56.757 00:21:56.757 Error Log 00:21:56.757 ========= 00:21:56.757 00:21:56.757 Arbitration 00:21:56.757 =========== 00:21:56.757 Arbitration Burst: 1 00:21:56.757 00:21:56.757 Power Management 00:21:56.757 ================ 00:21:56.757 Number of Power States: 1 00:21:56.757 Current Power State: Power State #0 00:21:56.757 Power State #0: 00:21:56.757 Max Power: 0.00 W 00:21:56.757 Non-Operational State: Operational 00:21:56.757 Entry Latency: Not Reported 00:21:56.757 Exit Latency: Not Reported 00:21:56.757 Relative Read Throughput: 0 00:21:56.757 Relative Read Latency: 0 00:21:56.757 Relative Write Throughput: 0 00:21:56.757 Relative Write Latency: 0 00:21:56.757 Idle Power: Not Reported 00:21:56.757 
Active Power: Not Reported 00:21:56.757 Non-Operational Permissive Mode: Not Supported 00:21:56.757 00:21:56.757 Health Information 00:21:56.757 ================== 00:21:56.757 Critical Warnings: 00:21:56.757 Available Spare Space: OK 00:21:56.757 Temperature: OK 00:21:56.757 Device Reliability: OK 00:21:56.757 Read Only: No 00:21:56.757 Volatile Memory Backup: OK 00:21:56.757 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:56.757 Temperature Threshold: [2024-04-26 14:58:39.157889] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.157895] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x233cd10) 00:21:56.757 [2024-04-26 14:58:39.157902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.757 [2024-04-26 14:58:39.157914] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a5400, cid 7, qid 0 00:21:56.757 [2024-04-26 14:58:39.158138] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.757 [2024-04-26 14:58:39.158144] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.757 [2024-04-26 14:58:39.158148] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.158151] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a5400) on tqpair=0x233cd10 00:21:56.757 [2024-04-26 14:58:39.158180] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:56.757 [2024-04-26 14:58:39.158191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.757 [2024-04-26 14:58:39.158197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.757 [2024-04-26 14:58:39.158203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.757 [2024-04-26 14:58:39.158209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.757 [2024-04-26 14:58:39.158217] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.158221] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.158224] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233cd10) 00:21:56.757 [2024-04-26 14:58:39.158231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.757 [2024-04-26 14:58:39.158242] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4e80, cid 3, qid 0 00:21:56.757 [2024-04-26 14:58:39.158439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.757 [2024-04-26 14:58:39.158445] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.757 [2024-04-26 14:58:39.158448] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.158452] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4e80) on tqpair=0x233cd10 00:21:56.757 [2024-04-26 14:58:39.158459] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.757 
[2024-04-26 14:58:39.158463] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.158466] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233cd10) 00:21:56.757 [2024-04-26 14:58:39.158473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.757 [2024-04-26 14:58:39.158485] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4e80, cid 3, qid 0 00:21:56.757 [2024-04-26 14:58:39.158650] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.757 [2024-04-26 14:58:39.158656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.757 [2024-04-26 14:58:39.158659] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.158663] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4e80) on tqpair=0x233cd10 00:21:56.757 [2024-04-26 14:58:39.158668] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:56.757 [2024-04-26 14:58:39.158673] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:56.757 [2024-04-26 14:58:39.158682] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.158686] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.158689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233cd10) 00:21:56.757 [2024-04-26 14:58:39.158696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.757 [2024-04-26 14:58:39.158705] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4e80, cid 3, qid 0 00:21:56.757 [2024-04-26 14:58:39.162847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.757 [2024-04-26 14:58:39.162856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.757 [2024-04-26 14:58:39.162860] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.162864] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4e80) on tqpair=0x233cd10 00:21:56.757 [2024-04-26 14:58:39.162874] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.162878] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.162882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x233cd10) 00:21:56.757 [2024-04-26 14:58:39.162889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.757 [2024-04-26 14:58:39.162901] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a4e80, cid 3, qid 0 00:21:56.757 [2024-04-26 14:58:39.163086] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:56.757 [2024-04-26 14:58:39.163092] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:56.757 [2024-04-26 14:58:39.163095] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:56.757 [2024-04-26 14:58:39.163099] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23a4e80) on tqpair=0x233cd10 
00:21:56.757 [2024-04-26 14:58:39.163107] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:21:56.757 0 Kelvin (-273 Celsius) 00:21:56.757 Available Spare: 0% 00:21:56.757 Available Spare Threshold: 0% 00:21:56.757 Life Percentage Used: 0% 00:21:56.757 Data Units Read: 0 00:21:56.757 Data Units Written: 0 00:21:56.757 Host Read Commands: 0 00:21:56.757 Host Write Commands: 0 00:21:56.757 Controller Busy Time: 0 minutes 00:21:56.757 Power Cycles: 0 00:21:56.757 Power On Hours: 0 hours 00:21:56.757 Unsafe Shutdowns: 0 00:21:56.757 Unrecoverable Media Errors: 0 00:21:56.757 Lifetime Error Log Entries: 0 00:21:56.757 Warning Temperature Time: 0 minutes 00:21:56.757 Critical Temperature Time: 0 minutes 00:21:56.757 00:21:56.757 Number of Queues 00:21:56.757 ================ 00:21:56.757 Number of I/O Submission Queues: 127 00:21:56.757 Number of I/O Completion Queues: 127 00:21:56.757 00:21:56.757 Active Namespaces 00:21:56.757 ================= 00:21:56.757 Namespace ID:1 00:21:56.757 Error Recovery Timeout: Unlimited 00:21:56.757 Command Set Identifier: NVM (00h) 00:21:56.757 Deallocate: Supported 00:21:56.758 Deallocated/Unwritten Error: Not Supported 00:21:56.758 Deallocated Read Value: Unknown 00:21:56.758 Deallocate in Write Zeroes: Not Supported 00:21:56.758 Deallocated Guard Field: 0xFFFF 00:21:56.758 Flush: Supported 00:21:56.758 Reservation: Supported 00:21:56.758 Namespace Sharing Capabilities: Multiple Controllers 00:21:56.758 Size (in LBAs): 131072 (0GiB) 00:21:56.758 Capacity (in LBAs): 131072 (0GiB) 00:21:56.758 Utilization (in LBAs): 131072 (0GiB) 00:21:56.758 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:56.758 EUI64: ABCDEF0123456789 00:21:56.758 UUID: d0b3657e-ae0c-4006-a065-0dbfc5810d1c 00:21:56.758 Thin Provisioning: Not Supported 00:21:56.758 Per-NS Atomic Units: Yes 00:21:56.758 Atomic Boundary Size (Normal): 0 00:21:56.758 Atomic Boundary Size (PFail): 0 00:21:56.758 Atomic Boundary Offset: 0 00:21:56.758 Maximum Single Source Range Length: 65535 00:21:56.758 Maximum Copy Length: 65535 00:21:56.758 Maximum Source Range Count: 1 00:21:56.758 NGUID/EUI64 Never Reused: No 00:21:56.758 Namespace Write Protected: No 00:21:56.758 Number of LBA Formats: 1 00:21:56.758 Current LBA Format: LBA Format #00 00:21:56.758 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:56.758 00:21:56.758 14:58:39 -- host/identify.sh@51 -- # sync 00:21:56.758 14:58:39 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:56.758 14:58:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.758 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:21:56.758 14:58:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.758 14:58:39 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:56.758 14:58:39 -- host/identify.sh@56 -- # nvmftestfini 00:21:56.758 14:58:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:56.758 14:58:39 -- nvmf/common.sh@117 -- # sync 00:21:56.758 14:58:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.758 14:58:39 -- nvmf/common.sh@120 -- # set +e 00:21:56.758 14:58:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.758 14:58:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.758 rmmod nvme_tcp 00:21:56.758 rmmod nvme_fabrics 00:21:56.758 rmmod nvme_keyring 00:21:56.758 14:58:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.758 14:58:39 -- nvmf/common.sh@124 -- # set -e 00:21:56.758 14:58:39 -- 
nvmf/common.sh@125 -- # return 0 00:21:56.758 14:58:39 -- nvmf/common.sh@478 -- # '[' -n 1145968 ']' 00:21:56.758 14:58:39 -- nvmf/common.sh@479 -- # killprocess 1145968 00:21:56.758 14:58:39 -- common/autotest_common.sh@936 -- # '[' -z 1145968 ']' 00:21:56.758 14:58:39 -- common/autotest_common.sh@940 -- # kill -0 1145968 00:21:56.758 14:58:39 -- common/autotest_common.sh@941 -- # uname 00:21:56.758 14:58:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:56.758 14:58:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1145968 00:21:56.758 14:58:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:56.758 14:58:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:56.758 14:58:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1145968' 00:21:56.758 killing process with pid 1145968 00:21:56.758 14:58:39 -- common/autotest_common.sh@955 -- # kill 1145968 00:21:56.758 [2024-04-26 14:58:39.329921] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:56.758 14:58:39 -- common/autotest_common.sh@960 -- # wait 1145968 00:21:57.019 14:58:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:57.019 14:58:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:57.019 14:58:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:57.019 14:58:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:57.019 14:58:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:57.019 14:58:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.019 14:58:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.019 14:58:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.930 14:58:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.930 00:21:58.930 real 0m10.950s 00:21:58.930 user 0m7.871s 00:21:58.930 sys 0m5.688s 00:21:58.930 14:58:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:58.930 14:58:41 -- common/autotest_common.sh@10 -- # set +x 00:21:58.930 ************************************ 00:21:58.930 END TEST nvmf_identify 00:21:58.930 ************************************ 00:21:58.930 14:58:41 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:58.930 14:58:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:58.930 14:58:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:58.930 14:58:41 -- common/autotest_common.sh@10 -- # set +x 00:21:59.191 ************************************ 00:21:59.191 START TEST nvmf_perf 00:21:59.191 ************************************ 00:21:59.191 14:58:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:59.191 * Looking for test storage... 
00:21:59.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.451 14:58:41 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.451 14:58:41 -- nvmf/common.sh@7 -- # uname -s 00:21:59.451 14:58:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.451 14:58:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.451 14:58:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.451 14:58:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.451 14:58:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.451 14:58:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.451 14:58:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.451 14:58:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.451 14:58:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.451 14:58:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.451 14:58:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:59.451 14:58:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:59.451 14:58:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.451 14:58:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.451 14:58:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.451 14:58:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.451 14:58:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.451 14:58:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.451 14:58:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.451 14:58:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.451 14:58:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.451 14:58:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.451 14:58:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.451 14:58:41 -- paths/export.sh@5 -- # export PATH 00:21:59.451 14:58:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.451 14:58:41 -- nvmf/common.sh@47 -- # : 0 00:21:59.451 14:58:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.451 14:58:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.451 14:58:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.451 14:58:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.451 14:58:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.451 14:58:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.451 14:58:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.451 14:58:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.451 14:58:41 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:59.451 14:58:41 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:59.451 14:58:41 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:59.451 14:58:41 -- host/perf.sh@17 -- # nvmftestinit 00:21:59.451 14:58:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:59.451 14:58:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.451 14:58:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:59.451 14:58:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:59.451 14:58:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:59.451 14:58:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.451 14:58:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.451 14:58:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.451 14:58:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:59.451 14:58:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:59.451 14:58:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:59.451 14:58:41 -- common/autotest_common.sh@10 -- # set +x 00:22:07.592 14:58:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:07.592 14:58:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.592 14:58:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.592 14:58:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.592 14:58:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.592 14:58:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.592 14:58:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.592 14:58:48 -- nvmf/common.sh@295 -- # net_devs=() 
00:22:07.592 14:58:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.592 14:58:48 -- nvmf/common.sh@296 -- # e810=() 00:22:07.592 14:58:48 -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.592 14:58:48 -- nvmf/common.sh@297 -- # x722=() 00:22:07.592 14:58:48 -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.592 14:58:48 -- nvmf/common.sh@298 -- # mlx=() 00:22:07.592 14:58:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.592 14:58:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.592 14:58:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.592 14:58:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.592 14:58:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.592 14:58:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.592 14:58:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:07.592 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:07.592 14:58:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.592 14:58:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:07.592 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:07.592 14:58:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.592 14:58:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.592 14:58:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.592 14:58:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.592 14:58:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:07.592 14:58:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:07.592 Found net devices under 0000:31:00.0: cvl_0_0 00:22:07.592 14:58:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.592 14:58:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.592 14:58:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.592 14:58:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.592 14:58:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.592 14:58:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:07.592 Found net devices under 0000:31:00.1: cvl_0_1 00:22:07.592 14:58:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.592 14:58:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:07.592 14:58:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:07.592 14:58:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:07.592 14:58:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:07.592 14:58:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.592 14:58:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.592 14:58:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.592 14:58:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.592 14:58:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.592 14:58:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.592 14:58:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.592 14:58:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.592 14:58:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.592 14:58:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.592 14:58:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.592 14:58:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.592 14:58:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.592 14:58:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.592 14:58:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.592 14:58:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.592 14:58:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.592 14:58:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.592 14:58:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.592 14:58:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:07.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.783 ms 00:22:07.592 00:22:07.592 --- 10.0.0.2 ping statistics --- 00:22:07.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.592 rtt min/avg/max/mdev = 0.783/0.783/0.783/0.000 ms 00:22:07.592 14:58:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:07.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:22:07.592 00:22:07.592 --- 10.0.0.1 ping statistics --- 00:22:07.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.592 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:22:07.592 14:58:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.592 14:58:49 -- nvmf/common.sh@411 -- # return 0 00:22:07.592 14:58:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:07.592 14:58:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.592 14:58:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:07.592 14:58:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:07.592 14:58:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.592 14:58:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:07.592 14:58:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:07.593 14:58:49 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:07.593 14:58:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:07.593 14:58:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:07.593 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:22:07.593 14:58:49 -- nvmf/common.sh@470 -- # nvmfpid=1150610 00:22:07.593 14:58:49 -- nvmf/common.sh@471 -- # waitforlisten 1150610 00:22:07.593 14:58:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:07.593 14:58:49 -- common/autotest_common.sh@817 -- # '[' -z 1150610 ']' 00:22:07.593 14:58:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.593 14:58:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:07.593 14:58:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.593 14:58:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:07.593 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:22:07.593 [2024-04-26 14:58:49.207313] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:07.593 [2024-04-26 14:58:49.207378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.593 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.593 [2024-04-26 14:58:49.279253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.593 [2024-04-26 14:58:49.352725] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.593 [2024-04-26 14:58:49.352767] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.593 [2024-04-26 14:58:49.352774] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.593 [2024-04-26 14:58:49.352782] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.593 [2024-04-26 14:58:49.352788] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:07.593 [2024-04-26 14:58:49.352884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.593 [2024-04-26 14:58:49.353093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.593 [2024-04-26 14:58:49.353094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.593 [2024-04-26 14:58:49.352950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.593 14:58:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:07.593 14:58:49 -- common/autotest_common.sh@850 -- # return 0 00:22:07.593 14:58:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:07.593 14:58:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:07.593 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:22:07.593 14:58:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.593 14:58:50 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:07.593 14:58:50 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:07.853 14:58:50 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:07.853 14:58:50 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:08.114 14:58:50 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:22:08.114 14:58:50 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:08.375 14:58:50 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:08.375 14:58:50 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:22:08.375 14:58:50 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:08.375 14:58:50 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:08.375 14:58:50 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:08.375 [2024-04-26 14:58:50.996370] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.375 14:58:51 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:08.636 14:58:51 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:08.636 14:58:51 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:08.897 14:58:51 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:08.897 14:58:51 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:08.897 14:58:51 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.157 [2024-04-26 14:58:51.670898] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.157 14:58:51 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:09.417 14:58:51 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:22:09.417 14:58:51 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:22:09.417 14:58:51 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
00:22:09.417 14:58:51 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:22:10.801 Initializing NVMe Controllers 00:22:10.801 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:22:10.801 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:22:10.801 Initialization complete. Launching workers. 00:22:10.801 ======================================================== 00:22:10.801 Latency(us) 00:22:10.801 Device Information : IOPS MiB/s Average min max 00:22:10.801 PCIE (0000:65:00.0) NSID 1 from core 0: 80792.29 315.59 395.58 65.74 4374.60 00:22:10.801 ======================================================== 00:22:10.801 Total : 80792.29 315.59 395.58 65.74 4374.60 00:22:10.801 00:22:10.801 14:58:53 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:10.801 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.184 Initializing NVMe Controllers 00:22:12.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:12.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:12.184 Initialization complete. Launching workers. 00:22:12.184 ======================================================== 00:22:12.184 Latency(us) 00:22:12.184 Device Information : IOPS MiB/s Average min max 00:22:12.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 70.00 0.27 14738.98 217.26 44741.86 00:22:12.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19717.78 7955.16 47890.38 00:22:12.184 ======================================================== 00:22:12.184 Total : 121.00 0.47 16837.48 217.26 47890.38 00:22:12.184 00:22:12.184 14:58:54 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:12.184 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.123 Initializing NVMe Controllers 00:22:13.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:13.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:13.123 Initialization complete. Launching workers. 
00:22:13.123 ======================================================== 00:22:13.123 Latency(us) 00:22:13.123 Device Information : IOPS MiB/s Average min max 00:22:13.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10206.07 39.87 3135.47 501.84 6516.92 00:22:13.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3835.65 14.98 8386.80 6886.85 15817.25 00:22:13.123 ======================================================== 00:22:13.123 Total : 14041.72 54.85 4569.93 501.84 15817.25 00:22:13.123 00:22:13.123 14:58:55 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:13.123 14:58:55 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:13.123 14:58:55 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:13.123 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.665 Initializing NVMe Controllers 00:22:15.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.665 Controller IO queue size 128, less than required. 00:22:15.665 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.665 Controller IO queue size 128, less than required. 00:22:15.665 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:15.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:15.665 Initialization complete. Launching workers. 00:22:15.665 ======================================================== 00:22:15.665 Latency(us) 00:22:15.665 Device Information : IOPS MiB/s Average min max 00:22:15.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1449.99 362.50 90246.80 57170.20 146095.09 00:22:15.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.50 143.62 228724.10 63504.38 356070.05 00:22:15.665 ======================================================== 00:22:15.665 Total : 2024.49 506.12 129543.02 57170.20 356070.05 00:22:15.665 00:22:15.665 14:58:57 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:15.665 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.665 No valid NVMe controllers or AIO or URING devices found 00:22:15.665 Initializing NVMe Controllers 00:22:15.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.665 Controller IO queue size 128, less than required. 00:22:15.665 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.665 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:15.665 Controller IO queue size 128, less than required. 00:22:15.665 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.665 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:15.665 WARNING: Some requested NVMe devices were skipped 00:22:15.665 14:58:58 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:15.665 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.209 Initializing NVMe Controllers 00:22:18.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:18.209 Controller IO queue size 128, less than required. 00:22:18.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:18.209 Controller IO queue size 128, less than required. 00:22:18.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:18.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:18.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:18.209 Initialization complete. Launching workers. 00:22:18.209 00:22:18.209 ==================== 00:22:18.209 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:18.209 TCP transport: 00:22:18.209 polls: 25223 00:22:18.209 idle_polls: 12633 00:22:18.209 sock_completions: 12590 00:22:18.209 nvme_completions: 6243 00:22:18.209 submitted_requests: 9308 00:22:18.209 queued_requests: 1 00:22:18.209 00:22:18.209 ==================== 00:22:18.209 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:18.209 TCP transport: 00:22:18.209 polls: 25182 00:22:18.209 idle_polls: 12291 00:22:18.209 sock_completions: 12891 00:22:18.209 nvme_completions: 6051 00:22:18.209 submitted_requests: 9070 00:22:18.209 queued_requests: 1 00:22:18.209 ======================================================== 00:22:18.209 Latency(us) 00:22:18.209 Device Information : IOPS MiB/s Average min max 00:22:18.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1560.50 390.12 84406.15 42315.53 124155.44 00:22:18.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1512.50 378.12 85753.45 44931.72 116650.40 00:22:18.209 ======================================================== 00:22:18.209 Total : 3073.00 768.25 85069.28 42315.53 124155.44 00:22:18.209 00:22:18.209 14:59:00 -- host/perf.sh@66 -- # sync 00:22:18.209 14:59:00 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.470 14:59:00 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:18.470 14:59:00 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:18.470 14:59:00 -- host/perf.sh@114 -- # nvmftestfini 00:22:18.470 14:59:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:18.470 14:59:00 -- nvmf/common.sh@117 -- # sync 00:22:18.470 14:59:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:18.470 14:59:00 -- nvmf/common.sh@120 -- # set +e 00:22:18.470 14:59:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:18.470 14:59:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:18.470 rmmod nvme_tcp 00:22:18.470 rmmod nvme_fabrics 00:22:18.470 rmmod nvme_keyring 00:22:18.470 14:59:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:18.470 14:59:01 -- nvmf/common.sh@124 -- # set -e 00:22:18.470 14:59:01 -- nvmf/common.sh@125 -- # return 0 00:22:18.470 14:59:01 -- 
nvmf/common.sh@478 -- # '[' -n 1150610 ']' 00:22:18.470 14:59:01 -- nvmf/common.sh@479 -- # killprocess 1150610 00:22:18.470 14:59:01 -- common/autotest_common.sh@936 -- # '[' -z 1150610 ']' 00:22:18.470 14:59:01 -- common/autotest_common.sh@940 -- # kill -0 1150610 00:22:18.470 14:59:01 -- common/autotest_common.sh@941 -- # uname 00:22:18.470 14:59:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:18.470 14:59:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1150610 00:22:18.470 14:59:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:18.470 14:59:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:18.470 14:59:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1150610' 00:22:18.470 killing process with pid 1150610 00:22:18.470 14:59:01 -- common/autotest_common.sh@955 -- # kill 1150610 00:22:18.470 14:59:01 -- common/autotest_common.sh@960 -- # wait 1150610 00:22:20.384 14:59:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:20.384 14:59:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:20.384 14:59:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:20.384 14:59:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.384 14:59:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.384 14:59:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.384 14:59:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.384 14:59:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.932 14:59:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:22.932 00:22:22.932 real 0m23.366s 00:22:22.932 user 0m56.204s 00:22:22.932 sys 0m7.927s 00:22:22.932 14:59:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:22.932 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:22:22.932 ************************************ 00:22:22.932 END TEST nvmf_perf 00:22:22.932 ************************************ 00:22:22.932 14:59:05 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:22.932 14:59:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:22.932 14:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:22.932 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:22:22.932 ************************************ 00:22:22.932 START TEST nvmf_fio_host 00:22:22.932 ************************************ 00:22:22.932 14:59:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:22.932 * Looking for test storage... 
00:22:22.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.932 14:59:05 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.932 14:59:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.932 14:59:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.932 14:59:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.932 14:59:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.932 14:59:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.932 14:59:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.932 14:59:05 -- paths/export.sh@5 -- # export PATH 00:22:22.932 14:59:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.932 14:59:05 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.932 14:59:05 -- nvmf/common.sh@7 -- # uname -s 00:22:22.932 14:59:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.932 14:59:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.932 14:59:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.932 14:59:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.932 14:59:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.932 14:59:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.932 14:59:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.932 14:59:05 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.932 14:59:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.932 14:59:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.932 14:59:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.932 14:59:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.932 14:59:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.932 14:59:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.932 14:59:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.932 14:59:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.932 14:59:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.932 14:59:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.932 14:59:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.932 14:59:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.933 14:59:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.933 14:59:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.933 14:59:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.933 14:59:05 -- paths/export.sh@5 -- # export PATH 00:22:22.933 14:59:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.933 14:59:05 -- nvmf/common.sh@47 -- # : 0 00:22:22.933 14:59:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.933 14:59:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.933 14:59:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.933 14:59:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.933 14:59:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.933 14:59:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.933 14:59:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.933 14:59:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.933 14:59:05 -- host/fio.sh@12 -- # nvmftestinit 00:22:22.933 14:59:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:22.933 14:59:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.933 14:59:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:22.933 14:59:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:22.933 14:59:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:22.933 14:59:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.933 14:59:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.933 14:59:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.933 14:59:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:22.933 14:59:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:22.933 14:59:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.933 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:22:29.528 14:59:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:29.528 14:59:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.528 14:59:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.528 14:59:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.528 14:59:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.528 14:59:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.528 14:59:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.528 14:59:12 -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.528 14:59:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.528 14:59:12 -- nvmf/common.sh@296 -- # e810=() 00:22:29.528 14:59:12 -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.528 14:59:12 -- nvmf/common.sh@297 -- # x722=() 00:22:29.528 14:59:12 -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.529 14:59:12 -- nvmf/common.sh@298 -- # mlx=() 00:22:29.529 14:59:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.529 14:59:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.529 14:59:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.529 14:59:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.529 14:59:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.529 14:59:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.529 14:59:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:29.529 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:29.529 14:59:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.529 14:59:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:29.529 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:29.529 14:59:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.529 14:59:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.529 14:59:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.529 14:59:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:29.529 14:59:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.529 14:59:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:29.529 Found net devices under 0000:31:00.0: cvl_0_0 00:22:29.529 14:59:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.529 14:59:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.529 14:59:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.529 14:59:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:29.529 14:59:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.529 14:59:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:29.529 Found net devices under 0000:31:00.1: cvl_0_1 00:22:29.529 14:59:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.529 14:59:12 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:29.529 14:59:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:29.529 14:59:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:29.529 14:59:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:29.529 14:59:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.529 14:59:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.529 14:59:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.529 14:59:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.529 14:59:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.529 14:59:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.529 14:59:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.529 14:59:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.529 14:59:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.529 14:59:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.529 14:59:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.529 14:59:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.529 14:59:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.790 14:59:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.790 14:59:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.790 14:59:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.790 14:59:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.790 14:59:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.790 14:59:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.790 14:59:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:22:29.790 00:22:29.790 --- 10.0.0.2 ping statistics --- 00:22:29.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.790 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:22:29.790 14:59:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:22:29.790 00:22:29.790 --- 10.0.0.1 ping statistics --- 00:22:29.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.790 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:22:29.790 14:59:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.790 14:59:12 -- nvmf/common.sh@411 -- # return 0 00:22:29.790 14:59:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:29.790 14:59:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.790 14:59:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:29.790 14:59:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:29.790 14:59:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.790 14:59:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:29.790 14:59:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:30.051 14:59:12 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:30.051 14:59:12 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:30.051 14:59:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:30.051 14:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:30.051 14:59:12 -- host/fio.sh@22 -- # nvmfpid=1157510 00:22:30.051 14:59:12 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.051 14:59:12 -- host/fio.sh@26 -- # waitforlisten 1157510 00:22:30.051 14:59:12 -- common/autotest_common.sh@817 -- # '[' -z 1157510 ']' 00:22:30.051 14:59:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.051 14:59:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:30.051 14:59:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.051 14:59:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:30.051 14:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:30.051 14:59:12 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:30.051 [2024-04-26 14:59:12.517501] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:30.051 [2024-04-26 14:59:12.517556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.051 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.051 [2024-04-26 14:59:12.585108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.051 [2024-04-26 14:59:12.650884] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.051 [2024-04-26 14:59:12.650921] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.051 [2024-04-26 14:59:12.650929] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.051 [2024-04-26 14:59:12.650937] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.051 [2024-04-26 14:59:12.650944] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
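The addressing used by this fio host test (and the other host tests in this job) is set up a few lines earlier by nvmf_tcp_init in nvmf/common.sh: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace for the target, the peer port (cvl_0_1) stays in the root namespace for the initiator, and TCP port 4420 is opened, then each side pings the other. Condensed from the trace, with interface names and addresses exactly as logged:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> root ns

This is why nvmf_tgt is always launched with "ip netns exec cvl_0_0_ns_spdk" and why the listeners are created on 10.0.0.2:4420.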
00:22:30.051 [2024-04-26 14:59:12.651114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.051 [2024-04-26 14:59:12.651237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.051 [2024-04-26 14:59:12.651393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.051 [2024-04-26 14:59:12.651394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.623 14:59:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:30.623 14:59:13 -- common/autotest_common.sh@850 -- # return 0 00:22:30.623 14:59:13 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.623 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.623 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:22:30.884 [2024-04-26 14:59:13.292434] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.884 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.884 14:59:13 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:30.884 14:59:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:30.884 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:22:30.884 14:59:13 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:30.884 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.884 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:22:30.884 Malloc1 00:22:30.884 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.884 14:59:13 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.884 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.884 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:22:30.884 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.884 14:59:13 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:30.884 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.884 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:22:30.884 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.884 14:59:13 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.884 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.884 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:22:30.884 [2024-04-26 14:59:13.392062] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.884 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.884 14:59:13 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:30.884 14:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.884 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:22:30.884 14:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.884 14:59:13 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:30.884 14:59:13 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:30.884 14:59:13 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:30.884 14:59:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:30.884 14:59:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:30.884 14:59:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:30.884 14:59:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.884 14:59:13 -- common/autotest_common.sh@1327 -- # shift 00:22:30.884 14:59:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:30.884 14:59:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.884 14:59:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.884 14:59:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:30.884 14:59:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:30.884 14:59:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:30.884 14:59:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:30.884 14:59:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.884 14:59:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.884 14:59:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:30.884 14:59:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:30.884 14:59:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:30.884 14:59:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:30.884 14:59:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:30.884 14:59:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:31.143 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:31.143 fio-3.35 00:22:31.143 Starting 1 thread 00:22:31.404 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.021 00:22:34.021 test: (groupid=0, jobs=1): err= 0: pid=1158031: Fri Apr 26 14:59:16 2024 00:22:34.021 read: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(91.5MiB/2004msec) 00:22:34.021 slat (usec): min=2, max=273, avg= 2.19, stdev= 2.51 00:22:34.021 clat (usec): min=3757, max=8978, avg=6047.72, stdev=1185.79 00:22:34.021 lat (usec): min=3787, max=8980, avg=6049.92, stdev=1185.78 00:22:34.021 clat percentiles (usec): 00:22:34.021 | 1.00th=[ 4424], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5014], 00:22:34.021 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5473], 60.00th=[ 5800], 00:22:34.021 | 70.00th=[ 7046], 80.00th=[ 7439], 90.00th=[ 7832], 95.00th=[ 8029], 00:22:34.021 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8717], 99.95th=[ 8848], 00:22:34.021 | 99.99th=[ 8979] 00:22:34.021 bw ( KiB/s): min=36632, max=55496, per=99.91%, avg=46730.00, stdev=9697.36, samples=4 00:22:34.021 iops : min= 9158, max=13874, avg=11682.50, stdev=2424.34, samples=4 00:22:34.021 write: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.9MiB/2004msec); 0 zone resets 00:22:34.021 slat (usec): min=2, max=194, avg= 2.27, stdev= 1.51 00:22:34.021 clat (usec): min=2753, 
max=8134, avg=4867.98, stdev=946.30 00:22:34.021 lat (usec): min=2771, max=8136, avg=4870.25, stdev=946.30 00:22:34.021 clat percentiles (usec): 00:22:34.021 | 1.00th=[ 3556], 5.00th=[ 3785], 10.00th=[ 3916], 20.00th=[ 4047], 00:22:34.021 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4686], 00:22:34.021 | 70.00th=[ 5669], 80.00th=[ 5997], 90.00th=[ 6259], 95.00th=[ 6456], 00:22:34.021 | 99.00th=[ 6783], 99.50th=[ 6849], 99.90th=[ 7242], 99.95th=[ 7504], 00:22:34.021 | 99.99th=[ 8094] 00:22:34.021 bw ( KiB/s): min=37576, max=54808, per=99.95%, avg=46404.00, stdev=9246.97, samples=4 00:22:34.021 iops : min= 9394, max=13702, avg=11601.00, stdev=2311.74, samples=4 00:22:34.021 lat (msec) : 4=7.88%, 10=92.12% 00:22:34.021 cpu : usr=73.09%, sys=25.06%, ctx=48, majf=0, minf=5 00:22:34.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:34.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:34.021 issued rwts: total=23433,23261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:34.021 00:22:34.021 Run status group 0 (all jobs): 00:22:34.021 READ: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=91.5MiB (96.0MB), run=2004-2004msec 00:22:34.021 WRITE: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.9MiB (95.3MB), run=2004-2004msec 00:22:34.021 14:59:16 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:34.021 14:59:16 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:34.021 14:59:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:34.021 14:59:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:34.021 14:59:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:34.021 14:59:16 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.021 14:59:16 -- common/autotest_common.sh@1327 -- # shift 00:22:34.021 14:59:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:34.021 14:59:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.021 14:59:16 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.021 14:59:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:34.021 14:59:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:34.021 14:59:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:34.021 14:59:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:34.021 14:59:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.021 14:59:16 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.021 14:59:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:34.021 14:59:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:34.021 14:59:16 -- common/autotest_common.sh@1331 -- # 
asan_lib= 00:22:34.021 14:59:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:34.021 14:59:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:34.021 14:59:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:34.021 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:34.021 fio-3.35 00:22:34.021 Starting 1 thread 00:22:34.021 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.599 00:22:36.599 test: (groupid=0, jobs=1): err= 0: pid=1158574: Fri Apr 26 14:59:18 2024 00:22:36.599 read: IOPS=9161, BW=143MiB/s (150MB/s)(287MiB/2006msec) 00:22:36.599 slat (usec): min=3, max=107, avg= 3.60, stdev= 1.54 00:22:36.599 clat (usec): min=2118, max=15311, avg=8446.08, stdev=2001.81 00:22:36.599 lat (usec): min=2121, max=15314, avg=8449.69, stdev=2001.90 00:22:36.599 clat percentiles (usec): 00:22:36.599 | 1.00th=[ 4359], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6652], 00:22:36.599 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 8979], 00:22:36.599 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[11076], 95.00th=[11600], 00:22:36.599 | 99.00th=[13042], 99.50th=[13829], 99.90th=[14615], 99.95th=[14877], 00:22:36.599 | 99.99th=[15270] 00:22:36.599 bw ( KiB/s): min=65888, max=81312, per=49.28%, avg=72240.00, stdev=6625.08, samples=4 00:22:36.599 iops : min= 4118, max= 5082, avg=4515.00, stdev=414.07, samples=4 00:22:36.599 write: IOPS=5417, BW=84.7MiB/s (88.8MB/s)(148MiB/1745msec); 0 zone resets 00:22:36.599 slat (usec): min=40, max=300, avg=41.06, stdev= 6.59 00:22:36.599 clat (usec): min=2332, max=16778, avg=9548.25, stdev=1629.02 00:22:36.599 lat (usec): min=2372, max=16818, avg=9589.31, stdev=1629.87 00:22:36.599 clat percentiles (usec): 00:22:36.599 | 1.00th=[ 6063], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 8225], 00:22:36.599 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:22:36.599 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11600], 95.00th=[12518], 00:22:36.599 | 99.00th=[13960], 99.50th=[14615], 99.90th=[16450], 99.95th=[16712], 00:22:36.599 | 99.99th=[16909] 00:22:36.599 bw ( KiB/s): min=68928, max=84320, per=86.51%, avg=74992.00, stdev=6707.52, samples=4 00:22:36.599 iops : min= 4308, max= 5270, avg=4687.00, stdev=419.22, samples=4 00:22:36.599 lat (msec) : 4=0.45%, 10=72.16%, 20=27.39% 00:22:36.599 cpu : usr=83.89%, sys=13.92%, ctx=22, majf=0, minf=14 00:22:36.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:36.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:36.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:36.599 issued rwts: total=18377,9454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:36.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:36.599 00:22:36.599 Run status group 0 (all jobs): 00:22:36.599 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=287MiB (301MB), run=2006-2006msec 00:22:36.599 WRITE: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=148MiB (155MB), run=1745-1745msec 00:22:36.599 14:59:19 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:36.599 14:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:22:36.599 14:59:19 -- common/autotest_common.sh@10 -- # set +x 00:22:36.599 14:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:36.599 14:59:19 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:36.599 14:59:19 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:36.599 14:59:19 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:36.599 14:59:19 -- host/fio.sh@84 -- # nvmftestfini 00:22:36.599 14:59:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:36.599 14:59:19 -- nvmf/common.sh@117 -- # sync 00:22:36.599 14:59:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.599 14:59:19 -- nvmf/common.sh@120 -- # set +e 00:22:36.599 14:59:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.599 14:59:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.599 rmmod nvme_tcp 00:22:36.599 rmmod nvme_fabrics 00:22:36.599 rmmod nvme_keyring 00:22:36.599 14:59:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.599 14:59:19 -- nvmf/common.sh@124 -- # set -e 00:22:36.599 14:59:19 -- nvmf/common.sh@125 -- # return 0 00:22:36.599 14:59:19 -- nvmf/common.sh@478 -- # '[' -n 1157510 ']' 00:22:36.599 14:59:19 -- nvmf/common.sh@479 -- # killprocess 1157510 00:22:36.600 14:59:19 -- common/autotest_common.sh@936 -- # '[' -z 1157510 ']' 00:22:36.600 14:59:19 -- common/autotest_common.sh@940 -- # kill -0 1157510 00:22:36.600 14:59:19 -- common/autotest_common.sh@941 -- # uname 00:22:36.600 14:59:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.600 14:59:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1157510 00:22:36.600 14:59:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:36.600 14:59:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:36.600 14:59:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1157510' 00:22:36.600 killing process with pid 1157510 00:22:36.600 14:59:19 -- common/autotest_common.sh@955 -- # kill 1157510 00:22:36.600 14:59:19 -- common/autotest_common.sh@960 -- # wait 1157510 00:22:36.859 14:59:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:36.859 14:59:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:36.859 14:59:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:36.860 14:59:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:36.860 14:59:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:36.860 14:59:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.860 14:59:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.860 14:59:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.769 14:59:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:38.769 00:22:38.769 real 0m16.076s 00:22:38.769 user 1m1.601s 00:22:38.769 sys 0m7.061s 00:22:38.769 14:59:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:38.769 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:22:38.769 ************************************ 00:22:38.769 END TEST nvmf_fio_host 00:22:38.769 ************************************ 00:22:38.769 14:59:21 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:38.769 14:59:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:38.769 14:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:38.769 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:22:39.030 
************************************ 00:22:39.030 START TEST nvmf_failover 00:22:39.030 ************************************ 00:22:39.030 14:59:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:39.030 * Looking for test storage... 00:22:39.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:39.030 14:59:21 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.030 14:59:21 -- nvmf/common.sh@7 -- # uname -s 00:22:39.030 14:59:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.030 14:59:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.030 14:59:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.030 14:59:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.030 14:59:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.030 14:59:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.030 14:59:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.030 14:59:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.030 14:59:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.030 14:59:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.030 14:59:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:39.030 14:59:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:39.030 14:59:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.030 14:59:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.030 14:59:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.290 14:59:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.290 14:59:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.290 14:59:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.290 14:59:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.290 14:59:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.290 14:59:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.290 14:59:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.290 14:59:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.290 14:59:21 -- paths/export.sh@5 -- # export PATH 00:22:39.290 14:59:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.290 14:59:21 -- nvmf/common.sh@47 -- # : 0 00:22:39.290 14:59:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.290 14:59:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.290 14:59:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.290 14:59:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.290 14:59:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.290 14:59:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.290 14:59:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.290 14:59:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.290 14:59:21 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:39.290 14:59:21 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:39.290 14:59:21 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:39.290 14:59:21 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.290 14:59:21 -- host/failover.sh@18 -- # nvmftestinit 00:22:39.290 14:59:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:39.290 14:59:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.290 14:59:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:39.290 14:59:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:39.290 14:59:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:39.290 14:59:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.290 14:59:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.290 14:59:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.290 14:59:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:39.290 14:59:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:39.290 14:59:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.290 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:22:47.426 14:59:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:47.426 14:59:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.426 14:59:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:47.426 14:59:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.426 14:59:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.426 14:59:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.426 14:59:28 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.426 14:59:28 -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.426 14:59:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.426 14:59:28 -- nvmf/common.sh@296 -- # e810=() 00:22:47.426 14:59:28 -- nvmf/common.sh@296 -- # local -ga e810 00:22:47.426 14:59:28 -- nvmf/common.sh@297 -- # x722=() 00:22:47.426 14:59:28 -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.426 14:59:28 -- nvmf/common.sh@298 -- # mlx=() 00:22:47.426 14:59:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.426 14:59:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.426 14:59:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.426 14:59:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.426 14:59:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.426 14:59:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.426 14:59:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:47.426 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:47.426 14:59:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.426 14:59:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:47.426 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:47.426 14:59:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.426 14:59:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.426 14:59:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.426 14:59:28 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:22:47.426 14:59:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.426 14:59:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:47.426 Found net devices under 0000:31:00.0: cvl_0_0 00:22:47.426 14:59:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.426 14:59:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.426 14:59:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.426 14:59:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:47.426 14:59:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.426 14:59:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:47.426 Found net devices under 0000:31:00.1: cvl_0_1 00:22:47.426 14:59:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.426 14:59:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:47.426 14:59:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:47.426 14:59:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:47.426 14:59:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:47.426 14:59:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.426 14:59:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.426 14:59:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.426 14:59:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:47.426 14:59:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.426 14:59:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.426 14:59:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.426 14:59:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.426 14:59:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.426 14:59:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.427 14:59:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.427 14:59:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.427 14:59:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.427 14:59:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.427 14:59:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.427 14:59:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.427 14:59:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.427 14:59:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.427 14:59:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.427 14:59:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:22:47.427 00:22:47.427 --- 10.0.0.2 ping statistics --- 00:22:47.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.427 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:22:47.427 14:59:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:22:47.427 00:22:47.427 --- 10.0.0.1 ping statistics --- 00:22:47.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.427 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:22:47.427 14:59:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.427 14:59:29 -- nvmf/common.sh@411 -- # return 0 00:22:47.427 14:59:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:47.427 14:59:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.427 14:59:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:47.427 14:59:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:47.427 14:59:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.427 14:59:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:47.427 14:59:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:47.427 14:59:29 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:47.427 14:59:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:47.427 14:59:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:47.427 14:59:29 -- common/autotest_common.sh@10 -- # set +x 00:22:47.427 14:59:29 -- nvmf/common.sh@470 -- # nvmfpid=1163263 00:22:47.427 14:59:29 -- nvmf/common.sh@471 -- # waitforlisten 1163263 00:22:47.427 14:59:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:47.427 14:59:29 -- common/autotest_common.sh@817 -- # '[' -z 1163263 ']' 00:22:47.427 14:59:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.427 14:59:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:47.427 14:59:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.427 14:59:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:47.427 14:59:29 -- common/autotest_common.sh@10 -- # set +x 00:22:47.427 [2024-04-26 14:59:29.222240] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:47.427 [2024-04-26 14:59:29.222322] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.427 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.427 [2024-04-26 14:59:29.314064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:47.427 [2024-04-26 14:59:29.406077] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.427 [2024-04-26 14:59:29.406139] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.427 [2024-04-26 14:59:29.406148] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.427 [2024-04-26 14:59:29.406155] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.427 [2024-04-26 14:59:29.406161] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
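The nvmf_tcp_init block above builds the test network out of the two E810 ports discovered earlier: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), and the target application is then launched inside that namespace. A minimal stand-alone sketch of the same setup follows; the interface names, addresses and ports are taken from this log, the workspace prefix is shortened to the SPDK source tree, and root privileges are assumed, so treat it as an illustration rather than a replay of nvmf/common.sh:

# back-to-back NVMe/TCP test topology, mirroring the nvmf_tcp_init steps above (run as root)
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
modprobe nvme-tcp
# start the NVMe-oF target inside the namespace; its RPC socket defaults to /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &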
00:22:47.427 [2024-04-26 14:59:29.406299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:47.427 [2024-04-26 14:59:29.406465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:47.427 [2024-04-26 14:59:29.406466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:22:47.427 14:59:29 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:22:47.427 14:59:29 -- common/autotest_common.sh@850 -- # return 0
00:22:47.427 14:59:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:22:47.427 14:59:29 -- common/autotest_common.sh@716 -- # xtrace_disable
00:22:47.427 14:59:29 -- common/autotest_common.sh@10 -- # set +x
00:22:47.427 14:59:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:47.427 14:59:30 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:22:47.687 [2024-04-26 14:59:30.176280] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:47.687 14:59:30 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:22:47.947 Malloc0
00:22:47.948 14:59:30 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:47.948 14:59:30 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:48.208 14:59:30 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:48.208 [2024-04-26 14:59:30.854217] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:48.469 14:59:30 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:48.469 [2024-04-26 14:59:31.014636] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:48.469 14:59:31 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:48.730 [2024-04-26 14:59:31.179154] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:22:48.730 14:59:31 -- host/failover.sh@31 -- # bdevperf_pid=1163728
00:22:48.730 14:59:31 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:48.730 14:59:31 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:22:48.730 14:59:31 -- host/failover.sh@34 -- # waitforlisten 1163728 /var/tmp/bdevperf.sock
00:22:48.730 14:59:31 -- common/autotest_common.sh@817 -- # '[' -z 1163728 ']'
00:22:48.730 14:59:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:48.730 14:59:31 -- common/autotest_common.sh@822 -- # local max_retries=100
00:22:48.730 14:59:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:48.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
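Stripped of the xtrace noise, the target configuration that host/failover.sh drives above is a short RPC sequence followed by launching bdevperf in wait-for-RPC mode. The recap below uses the same arguments as the log, with the Jenkins workspace prefix dropped and the three listener calls folded into a loop; it is a sketch of the flow, not the script itself:

# target side (the target's RPC socket is the default /var/tmp/spdk.sock)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed namespace, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# initiator side: bdevperf idles (-z) on its own RPC socket until perform_tests is sent
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &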
00:22:48.730 14:59:31 -- common/autotest_common.sh@826 -- # xtrace_disable
00:22:48.730 14:59:31 -- common/autotest_common.sh@10 -- # set +x
00:22:49.671 14:59:32 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:22:49.671 14:59:32 -- common/autotest_common.sh@850 -- # return 0
00:22:49.671 14:59:32 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:49.671 NVMe0n1
00:22:49.671 14:59:32 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:49.931 00
00:22:50.191 14:59:32 -- host/failover.sh@39 -- # run_test_pid=1163968
00:22:50.191 14:59:32 -- host/failover.sh@41 -- # sleep 1
00:22:50.191 14:59:32 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:51.137 14:59:33 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:51.137 [2024-04-26 14:59:33.768794] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f9ca0 is same with the state(5) to be set
[the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x23f9ca0 repeats many more times; duplicate lines omitted]
00:22:51.138 14:59:33 -- host/failover.sh@45 -- # sleep 3
00:22:54.437 14:59:36 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:54.437 00
00:22:54.698 14:59:37 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:54.698 [2024-04-26 14:59:37.248468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fab50 is same with the state(5) to be set
[the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x23fab50 repeats many more times; duplicate lines omitted]
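At this point the listener on port 4420 has been removed and the initiator is expected to keep I/O running over the second path attached at host/failover.sh@36 on port 4421; the repeated recv-state messages coincide with the target dropping the queue pairs that were connected through the removed listener. One way to see which paths the bdevperf process currently holds for the NVMe0 controller is to query its private RPC socket. This is a sketch for inspection only, not part of failover.sh, and assumes the RPC options shown are available in this SPDK version (drop -n if bdev_nvme_get_controllers does not accept a name filter):

# ask the bdevperf process (not the target) about its controller and bdev state
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b NVMe0n1   # the bdev should still be present while I/O fails over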
00:22:54.699 14:59:37 -- host/failover.sh@50 -- # sleep 3
00:22:57.995 14:59:40 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:57.996 [2024-04-26 14:59:40.424730] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:57.996 14:59:40 -- host/failover.sh@55 -- # sleep 1
00:22:58.937 14:59:41 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:58.937 [2024-04-26 14:59:41.594172] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fb850 is same with the state(5) to be set
[the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x23fb850 repeats many more times; duplicate lines omitted]
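The sequence just above is the fail-back half of the test: the original 4420 listener is re-added at host/failover.sh@53, and after a short delay the temporary 4422 listener is pulled, producing another burst of recv-state messages as those connections are torn down. On the target side, the set of listeners a subsystem still exposes can be confirmed with nvmf_get_subsystems; a sketch, run from the SPDK tree against the target's default /var/tmp/spdk.sock RPC socket:

scripts/rpc.py nvmf_get_subsystems    # JSON output includes each subsystem's listen_addresses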
00:22:59.199 14:59:41 -- host/failover.sh@59 -- # wait 1163968
00:23:05.791 0
00:23:05.791 14:59:47 -- host/failover.sh@61 -- # killprocess 1163728
00:23:05.791 14:59:47 -- common/autotest_common.sh@936 -- # '[' -z 1163728 ']'
00:23:05.791 14:59:47 -- common/autotest_common.sh@940 -- # kill -0 1163728
00:23:05.791 14:59:47 -- common/autotest_common.sh@941 -- # uname
00:23:05.791 14:59:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:05.791 14:59:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1163728
00:23:05.791 14:59:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:05.791 14:59:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:05.791 14:59:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1163728'
00:23:05.791 killing process with pid 1163728
00:23:05.791 14:59:47 -- common/autotest_common.sh@955 -- # kill 1163728
00:23:05.791 14:59:47 -- common/autotest_common.sh@960 -- # wait 1163728
00:23:05.791 14:59:47 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
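The output that follows is the bdevperf log, try.txt, replayed by the cat at host/failover.sh@63. The test relies on the trap installed at host/failover.sh@33 to guarantee this file is printed and both processes are cleaned up even if the run aborts early; a minimal, self-contained version of that pattern looks roughly like this (the real helpers process_shm, killprocess and nvmftestfini live in the shared test libraries, so this sketch only imitates their effect):

#!/usr/bin/env bash
testdir=$(readlink -f "$(dirname "$0")")
cleanup() {
    cat "$testdir/try.txt" 2>/dev/null        # always surface the bdevperf output
    rm -f "$testdir/try.txt"
    if [[ -n "$bdevperf_pid" ]]; then
        kill "$bdevperf_pid" 2>/dev/null
        wait "$bdevperf_pid" 2>/dev/null
    fi
}
trap cleanup SIGINT SIGTERM EXIT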
00:23:05.791 [2024-04-26 14:59:31.255684] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:23:05.791 [2024-04-26 14:59:31.255747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163728 ]
00:23:05.791 EAL: No free 2048 kB hugepages reported on node 1
00:23:05.791 [2024-04-26 14:59:31.315251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:05.791 [2024-04-26 14:59:31.381116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:05.791 Running I/O for 15 seconds...
00:23:05.791 [2024-04-26 14:59:33.769909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.791 [2024-04-26 14:59:33.769942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the dump continues with matching READ command / ABORTED - SQ DELETION completion pairs for the subsequent LBAs (93736, 93744, and so on)]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.770939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.792 [2024-04-26 14:59:33.770957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.792 [2024-04-26 14:59:33.770973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.792 [2024-04-26 14:59:33.770988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.770998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.792 [2024-04-26 14:59:33.771005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.771014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.792 [2024-04-26 14:59:33.771021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.771030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.792 [2024-04-26 14:59:33.771036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.771045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.792 [2024-04-26 14:59:33.771052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 [2024-04-26 14:59:33.771062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.771069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.792 
[2024-04-26 14:59:33.771078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.792 [2024-04-26 14:59:33.771085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.793 [2024-04-26 14:59:33.771312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94552 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 14:59:33.771712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.793 [2024-04-26 14:59:33.771721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.793 [2024-04-26 
14:59:33.771728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:33.771958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.794 [2024-04-26 14:59:33.771976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.771985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.794 [2024-04-26 14:59:33.771992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.772001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.794 [2024-04-26 14:59:33.772008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.772028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.794 [2024-04-26 14:59:33.772035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.794 [2024-04-26 14:59:33.772042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94368 len:8 PRP1 0x0 PRP2 0x0 00:23:05.794 [2024-04-26 14:59:33.772049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.772085] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd5d8e0 was disconnected and freed. reset controller. 
00:23:05.794 [2024-04-26 14:59:33.772095] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:05.794 [2024-04-26 14:59:33.772114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.794 [2024-04-26 14:59:33.772122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.772130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.794 [2024-04-26 14:59:33.772137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.772145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.794 [2024-04-26 14:59:33.772151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.772159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.794 [2024-04-26 14:59:33.772166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:33.772173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:05.794 [2024-04-26 14:59:33.772210] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd67e40 (9): Bad file descriptor 00:23:05.794 [2024-04-26 14:59:33.775704] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:05.794 [2024-04-26 14:59:33.903503] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
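00:23:05.794 # Editor's note, for context only: the notices above show bdevperf I/O on qid 1 being aborted with SQ DELETION status while bdev_nvme fails over from the first listener (10.0.0.2:4420) to the second (10.0.0.2:4421) and resets the controller. The following is a minimal, hedged sketch of how an initiator-side run like this is typically driven; the paths, socket name, queue-depth/IO-size values, and the NQN below are illustrative assumptions, not values taken from this log.
00:23:05.794 #
00:23:05.794 #   # Sketch only: flag values, file paths and the NQN are assumptions.
00:23:05.794 #   # Launch bdevperf waiting for RPC configuration, then run I/O for 15 seconds.
00:23:05.794 #   ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 &
00:23:05.794 #
00:23:05.794 #   # Attach the target subsystem over TCP; a second attach against the alternate
00:23:05.794 #   # listener (port 4421) registers an extra path for NVMe0 that bdev_nvme can fail
00:23:05.794 #   # over to when 4420 goes away (depending on SPDK version this may also need the
00:23:05.794 #   # multipath/failover option of bdev_nvme_attach_controller).
00:23:05.794 #   ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
00:23:05.794 #       -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
00:23:05.794 #   ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
00:23:05.794 #       -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
00:23:05.794 #
00:23:05.794 #   # Kick off the timed I/O run once the bdev exists.
00:23:05.794 #   ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests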
00:23:05.794 [2024-04-26 14:59:37.252471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.794 [2024-04-26 14:59:37.252508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252677] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.794 [2024-04-26 14:59:37.252733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.794 [2024-04-26 14:59:37.252742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252847] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.252985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.252994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.253001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43032 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.253017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.795 [2024-04-26 14:59:37.253033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.795 [2024-04-26 14:59:37.253049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.795 [2024-04-26 14:59:37.253066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.795 [2024-04-26 14:59:37.253082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.795 [2024-04-26 14:59:37.253099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.795 [2024-04-26 14:59:37.253115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.795 [2024-04-26 14:59:37.253131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.253147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.253163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:05.795 [2024-04-26 14:59:37.253181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.253197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.253212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.253228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.253244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.795 [2024-04-26 14:59:37.253260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.795 [2024-04-26 14:59:37.253269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253342] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253508] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.796 [2024-04-26 14:59:37.253654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.796 [2024-04-26 14:59:37.253682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43296 len:8 PRP1 0x0 PRP2 0x0 00:23:05.796 [2024-04-26 
14:59:37.253689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.796 [2024-04-26 14:59:37.253704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.796 [2024-04-26 14:59:37.253710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43304 len:8 PRP1 0x0 PRP2 0x0 00:23:05.796 [2024-04-26 14:59:37.253718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.796 [2024-04-26 14:59:37.253731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.796 [2024-04-26 14:59:37.253736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43312 len:8 PRP1 0x0 PRP2 0x0 00:23:05.796 [2024-04-26 14:59:37.253743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.796 [2024-04-26 14:59:37.253756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.796 [2024-04-26 14:59:37.253762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43320 len:8 PRP1 0x0 PRP2 0x0 00:23:05.796 [2024-04-26 14:59:37.253769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.796 [2024-04-26 14:59:37.253782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.796 [2024-04-26 14:59:37.253788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43328 len:8 PRP1 0x0 PRP2 0x0 00:23:05.796 [2024-04-26 14:59:37.253795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.796 [2024-04-26 14:59:37.253809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.796 [2024-04-26 14:59:37.253815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43336 len:8 PRP1 0x0 PRP2 0x0 00:23:05.796 [2024-04-26 14:59:37.253822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.796 [2024-04-26 14:59:37.253830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.796 [2024-04-26 14:59:37.253835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.796 [2024-04-26 14:59:37.253844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43344 len:8 PRP1 0x0 PRP2 0x0 00:23:05.796 [2024-04-26 14:59:37.253851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.253858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.253864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.253870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43352 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.253877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.253884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.253889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.253895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43360 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.253902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.253910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.253916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.253923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43368 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.253929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.253937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.253942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.253948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43376 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.253955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.253962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.253968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.253974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43384 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.253981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.253988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.253994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43392 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43400 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43408 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43416 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43424 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43432 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43440 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:05.797 [2024-04-26 14:59:37.254171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43448 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43456 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43464 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43472 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43480 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43488 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254330] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43496 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43504 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43512 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43520 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.797 [2024-04-26 14:59:37.254436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.797 [2024-04-26 14:59:37.254441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.797 [2024-04-26 14:59:37.254447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43528 len:8 PRP1 0x0 PRP2 0x0 00:23:05.797 [2024-04-26 14:59:37.254454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43536 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43544 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43552 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43560 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43568 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43576 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43584 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 
14:59:37.254652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43592 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43600 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43608 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.254736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43616 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.254743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.254751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.254756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43624 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43632 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264663] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43640 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43648 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43656 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43664 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43672 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43680 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43688 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43696 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.798 [2024-04-26 14:59:37.264888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.798 [2024-04-26 14:59:37.264894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.798 [2024-04-26 14:59:37.264900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43704 len:8 PRP1 0x0 PRP2 0x0 00:23:05.798 [2024-04-26 14:59:37.264907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:37.264915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.799 [2024-04-26 14:59:37.264920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.799 [2024-04-26 14:59:37.264926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43712 len:8 PRP1 0x0 PRP2 0x0 00:23:05.799 [2024-04-26 14:59:37.264933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:37.264940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.799 [2024-04-26 14:59:37.264945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.799 [2024-04-26 14:59:37.264951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43720 len:8 PRP1 0x0 PRP2 0x0 00:23:05.799 [2024-04-26 14:59:37.264958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:37.264965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.799 [2024-04-26 14:59:37.264971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.799 [2024-04-26 14:59:37.264976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43728 len:8 PRP1 0x0 PRP2 0x0 00:23:05.799 [2024-04-26 14:59:37.264984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:37.264991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.799 [2024-04-26 14:59:37.264996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.799 [2024-04-26 
14:59:37.265002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43736 len:8 PRP1 0x0 PRP2 0x0
00:23:05.799 [2024-04-26 14:59:37.265008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:37.265016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.799 [2024-04-26 14:59:37.265021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.799 [2024-04-26 14:59:37.265027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43744 len:8 PRP1 0x0 PRP2 0x0
00:23:05.799 [2024-04-26 14:59:37.265034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:37.265041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.799 [2024-04-26 14:59:37.265047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.799 [2024-04-26 14:59:37.265053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43752 len:8 PRP1 0x0 PRP2 0x0
00:23:05.799 [2024-04-26 14:59:37.265060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:37.265096] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd742f0 was disconnected and freed. reset controller.
00:23:05.799 [2024-04-26 14:59:37.265105] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:05.799 [2024-04-26 14:59:37.265132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.799 [2024-04-26 14:59:37.265143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:37.265153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.799 [2024-04-26 14:59:37.265160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:37.265167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.799 [2024-04-26 14:59:37.265174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:37.265182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.799 [2024-04-26 14:59:37.265189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:37.265197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
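
The burst above is the abort storm expected from the TCP failover test: every command still outstanding on qid:1 completes with ABORTED - SQ DELETION, the disconnected qpair 0xd742f0 is freed, bdev_nvme starts failover from 10.0.0.2:4421 to 10.0.0.2:4422, the pending ASYNC EVENT REQUESTs on the admin queue are aborted, and controller nqn.2016-06.io.spdk:cnode1 is marked failed; the reset logged below then completes successfully. The following is a minimal sketch for condensing a saved copy of this console output into the landmark events and per-queue abort counts; the script name and regular expressions are illustrative assumptions, not SPDK tooling.

#!/usr/bin/env python3
# summarize_qpair_aborts.py - illustrative log-condensing sketch (not part of SPDK).
# Counts "ABORTED - SQ DELETION" completions per queue and echoes the
# bdev_nvme/nvme_ctrlr landmark messages from a console log in the format above.
import re
import sys
from collections import Counter

ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")
LANDMARKS = (
    "bdev_nvme_disconnected_qpair_cb",   # qpair freed, reset requested
    "bdev_nvme_failover_trid",           # path switch, e.g. 4421 -> 4422
    "nvme_ctrlr_fail",                   # controller marked failed
    "nvme_ctrlr_disconnect",             # controller reset begins
    "_bdev_nvme_reset_ctrlr_complete",   # reset finished
)

def summarize(lines):
    aborts = Counter()
    for line in lines:
        match = ABORT_RE.search(line)
        if match:
            aborts[int(match.group(1))] += 1
        if any(tag in line for tag in LANDMARKS):
            print(line.rstrip())
    for qid in sorted(aborts):
        print(f"qid {qid}: {aborts[qid]} completions aborted by SQ deletion")

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as log:
        summarize(log)

Usage: python3 summarize_qpair_aborts.py <saved console log>.
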
00:23:05.799 [2024-04-26 14:59:37.265235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd67e40 (9): Bad file descriptor
00:23:05.799 [2024-04-26 14:59:37.268809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:05.799 [2024-04-26 14:59:37.306057] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:05.799 [2024-04-26 14:59:41.595036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.799 [2024-04-26 14:59:41.595074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:41.595084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.799 [2024-04-26 14:59:41.595092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:41.595100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.799 [2024-04-26 14:59:41.595108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:41.595116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.799 [2024-04-26 14:59:41.595123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:41.595131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67e40 is same with the state(5) to be set
00:23:05.799 [2024-04-26 14:59:41.595189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.799 [2024-04-26 14:59:41.595198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:41.595213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.799 [2024-04-26 14:59:41.595221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:41.595230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.799 [2024-04-26 14:59:41.595237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:41.595246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.799 [2024-04-26 14:59:41.595257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.799 [2024-04-26 14:59:41.595267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44848 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 
14:59:41.595435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.799 [2024-04-26 14:59:41.595462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.799 [2024-04-26 14:59:41.595470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.595990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.595997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.800 [2024-04-26 14:59:41.596006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.800 [2024-04-26 14:59:41.596012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596103] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596265] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45416 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-04-26 14:59:41.596500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-04-26 14:59:41.596516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-04-26 14:59:41.596532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-04-26 14:59:41.596549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-04-26 14:59:41.596565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.801 [2024-04-26 14:59:41.596581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:05.801 [2024-04-26 14:59:41.596597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.801 [2024-04-26 14:59:41.596644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.801 [2024-04-26 14:59:41.596653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596931] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.596988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.596995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.597011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.802 [2024-04-26 14:59:41.597028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:05.802 [2024-04-26 14:59:41.597265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.802 [2024-04-26 14:59:41.597272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.802 [2024-04-26 14:59:41.597291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.802 [2024-04-26 14:59:41.597297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.802 [2024-04-26 14:59:41.597304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44816 len:8 PRP1 0x0 PRP2 0x0 00:23:05.803 [2024-04-26 14:59:41.597311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.803 [2024-04-26 14:59:41.597347] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd742f0 was disconnected and freed. reset controller. 00:23:05.803 [2024-04-26 14:59:41.597357] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:05.803 [2024-04-26 14:59:41.597367] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:05.803 [2024-04-26 14:59:41.600840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:05.803 [2024-04-26 14:59:41.600864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd67e40 (9): Bad file descriptor 00:23:05.803 [2024-04-26 14:59:41.680879] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
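The run above ends the way a successful failover should: every WRITE and READ still queued on the old path is completed manually as ABORTED - SQ DELETION when the qpair is torn down, bdev_nvme announces the failover from 10.0.0.2:4422 to 10.0.0.2:4420, and the controller reset completes. A quick sanity check on a log of this shape is to count those markers, as the script does just below for the reset messages; a minimal sketch, assuming the bdevperf output has been captured to the try.txt file this test uses (path taken from the log, and the expected count of 3 comes from the check that follows):

# Tally the failover signature in the captured bdevperf log.
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
aborts=$(grep -c 'ABORTED - SQ DELETION' "$log")      # queued I/O drained on qpair teardown
failovers=$(grep -c 'Start failover from' "$log")     # path switches announced by bdev_nvme
resets=$(grep -c 'Resetting controller successful' "$log")
echo "aborts=$aborts failovers=$failovers resets=$resets"
# The failover test requires exactly 3 successful resets; anything else fails the run.
(( resets == 3 )) || exit 1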
00:23:05.803 
00:23:05.803 Latency(us)
00:23:05.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.803 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:05.803 Verification LBA range: start 0x0 length 0x4000
00:23:05.803 NVMe0n1 : 15.01 11103.23 43.37 570.68 0.00 10937.21 768.00 19005.44
00:23:05.803 ===================================================================================================================
00:23:05.803 Total : 11103.23 43.37 570.68 0.00 10937.21 768.00 19005.44
00:23:05.803 Received shutdown signal, test time was about 15.000000 seconds
00:23:05.803 
00:23:05.803 Latency(us)
00:23:05.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.803 ===================================================================================================================
00:23:05.803 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:05.803 14:59:47 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:05.803 14:59:47 -- host/failover.sh@65 -- # count=3
00:23:05.803 14:59:47 -- host/failover.sh@67 -- # (( count != 3 ))
00:23:05.803 14:59:47 -- host/failover.sh@73 -- # bdevperf_pid=1166981
00:23:05.803 14:59:47 -- host/failover.sh@75 -- # waitforlisten 1166981 /var/tmp/bdevperf.sock
00:23:05.803 14:59:47 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:05.803 14:59:47 -- common/autotest_common.sh@817 -- # '[' -z 1166981 ']'
00:23:05.803 14:59:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:05.803 14:59:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:23:05.803 14:59:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:05.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
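The steps that follow repeat the drill with a standalone bdevperf started in wait-for-RPC mode (-z) on /var/tmp/bdevperf.sock: two extra listeners are published on ports 4421 and 4422, the NVMe0 controller is attached once per path, the active path is detached so bdev_nvme has to fail over, and the verify workload is replayed with perform_tests. A condensed sketch of that sequence, assuming the workspace paths, target address and subsystem NQN shown in the log (the loop is illustrative shorthand for the three separate attach calls the script makes):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Start bdevperf idle; -z makes it wait for configuration over the RPC socket.
$bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &

# Publish alternate listeners on the target so the initiator has backup paths.
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

# Attach the same controller once per path (4420, 4421, 4422).
for port in 4420 4421 4422; do
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
done

# Drop the active path, give bdev_nvme time to fail over, and confirm NVMe0 survived.
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
sleep 3
$rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0

# Replay the verify workload over the remaining paths.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests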
00:23:05.803 14:59:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:05.803 14:59:47 -- common/autotest_common.sh@10 -- # set +x 00:23:06.374 14:59:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.374 14:59:48 -- common/autotest_common.sh@850 -- # return 0 00:23:06.374 14:59:48 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:06.374 [2024-04-26 14:59:48.927764] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:06.374 14:59:48 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:06.634 [2024-04-26 14:59:49.096197] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:06.634 14:59:49 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:06.893 NVMe0n1 00:23:06.893 14:59:49 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.153 00:23:07.153 14:59:49 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.722 00:23:07.723 14:59:50 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.723 14:59:50 -- host/failover.sh@82 -- # grep -q NVMe0 00:23:07.723 14:59:50 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.982 14:59:50 -- host/failover.sh@87 -- # sleep 3 00:23:11.272 14:59:53 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:11.272 14:59:53 -- host/failover.sh@88 -- # grep -q NVMe0 00:23:11.272 14:59:53 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:11.272 14:59:53 -- host/failover.sh@90 -- # run_test_pid=1168038 00:23:11.272 14:59:53 -- host/failover.sh@92 -- # wait 1168038 00:23:12.211 0 00:23:12.211 14:59:54 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:12.211 [2024-04-26 14:59:48.009790] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:23:12.211 [2024-04-26 14:59:48.009854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166981 ] 00:23:12.211 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.211 [2024-04-26 14:59:48.069803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.211 [2024-04-26 14:59:48.131299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.211 [2024-04-26 14:59:50.491753] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:12.212 [2024-04-26 14:59:50.491797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.212 [2024-04-26 14:59:50.491807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.212 [2024-04-26 14:59:50.491818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.212 [2024-04-26 14:59:50.491826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.212 [2024-04-26 14:59:50.491833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.212 [2024-04-26 14:59:50.491846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.212 [2024-04-26 14:59:50.491854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.212 [2024-04-26 14:59:50.491861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.212 [2024-04-26 14:59:50.491869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.212 [2024-04-26 14:59:50.491896] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.212 [2024-04-26 14:59:50.491910] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1150e40 (9): Bad file descriptor 00:23:12.212 [2024-04-26 14:59:50.513158] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:12.212 Running I/O for 1 seconds... 
00:23:12.212 
00:23:12.212 Latency(us)
00:23:12.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:12.212 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:12.212 Verification LBA range: start 0x0 length 0x4000
00:23:12.212 NVMe0n1 : 1.01 11780.19 46.02 0.00 0.00 10813.11 2280.11 10267.31
00:23:12.212 ===================================================================================================================
00:23:12.212 Total : 11780.19 46.02 0.00 0.00 10813.11 2280.11 10267.31
00:23:12.212 14:59:54 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:12.212 14:59:54 -- host/failover.sh@95 -- # grep -q NVMe0
00:23:12.472 14:59:54 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:12.733 14:59:55 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:12.733 14:59:55 -- host/failover.sh@99 -- # grep -q NVMe0
00:23:12.733 14:59:55 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:13.026 14:59:55 -- host/failover.sh@101 -- # sleep 3
00:23:16.443 14:59:58 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:16.443 14:59:58 -- host/failover.sh@103 -- # grep -q NVMe0
00:23:16.443 14:59:58 -- host/failover.sh@108 -- # killprocess 1166981
00:23:16.443 14:59:58 -- common/autotest_common.sh@936 -- # '[' -z 1166981 ']'
00:23:16.443 14:59:58 -- common/autotest_common.sh@940 -- # kill -0 1166981
00:23:16.443 14:59:58 -- common/autotest_common.sh@941 -- # uname
00:23:16.443 14:59:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:16.443 14:59:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1166981
00:23:16.443 14:59:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:16.443 14:59:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:16.443 14:59:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1166981'
00:23:16.443 killing process with pid 1166981
00:23:16.443 14:59:58 -- common/autotest_common.sh@955 -- # kill 1166981
00:23:16.443 14:59:58 -- common/autotest_common.sh@960 -- # wait 1166981
00:23:16.443 14:59:58 -- host/failover.sh@110 -- # sync
00:23:16.443 14:59:58 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:16.443 14:59:59 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:23:16.443 14:59:59 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:16.443 14:59:59 -- host/failover.sh@116 -- # nvmftestfini
00:23:16.443 14:59:59 -- nvmf/common.sh@477 -- # nvmfcleanup
00:23:16.443 14:59:59 -- nvmf/common.sh@117 -- # sync
00:23:16.443 14:59:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:16.443 14:59:59 -- nvmf/common.sh@120 -- # set +e
00:23:16.443 14:59:59 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:16.443 14:59:59 -- nvmf/common.sh@122
-- # modprobe -v -r nvme-tcp 00:23:16.443 rmmod nvme_tcp 00:23:16.443 rmmod nvme_fabrics 00:23:16.443 rmmod nvme_keyring 00:23:16.443 14:59:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:16.443 14:59:59 -- nvmf/common.sh@124 -- # set -e 00:23:16.443 14:59:59 -- nvmf/common.sh@125 -- # return 0 00:23:16.443 14:59:59 -- nvmf/common.sh@478 -- # '[' -n 1163263 ']' 00:23:16.443 14:59:59 -- nvmf/common.sh@479 -- # killprocess 1163263 00:23:16.443 14:59:59 -- common/autotest_common.sh@936 -- # '[' -z 1163263 ']' 00:23:16.443 14:59:59 -- common/autotest_common.sh@940 -- # kill -0 1163263 00:23:16.443 14:59:59 -- common/autotest_common.sh@941 -- # uname 00:23:16.443 14:59:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:16.443 14:59:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1163263 00:23:16.705 14:59:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:16.705 14:59:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:16.705 14:59:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1163263' 00:23:16.705 killing process with pid 1163263 00:23:16.705 14:59:59 -- common/autotest_common.sh@955 -- # kill 1163263 00:23:16.705 14:59:59 -- common/autotest_common.sh@960 -- # wait 1163263 00:23:16.705 14:59:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:16.705 14:59:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:16.705 14:59:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:16.705 14:59:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.705 14:59:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.705 14:59:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.705 14:59:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.705 14:59:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.256 15:00:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.256 00:23:19.256 real 0m39.755s 00:23:19.256 user 2m2.237s 00:23:19.256 sys 0m8.143s 00:23:19.256 15:00:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:19.256 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:23:19.256 ************************************ 00:23:19.256 END TEST nvmf_failover 00:23:19.256 ************************************ 00:23:19.256 15:00:01 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:19.256 15:00:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:19.256 15:00:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:19.256 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:23:19.256 ************************************ 00:23:19.256 START TEST nvmf_discovery 00:23:19.256 ************************************ 00:23:19.256 15:00:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:19.256 * Looking for test storage... 
00:23:19.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.256 15:00:01 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.256 15:00:01 -- nvmf/common.sh@7 -- # uname -s 00:23:19.256 15:00:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.256 15:00:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.256 15:00:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.256 15:00:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.256 15:00:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.256 15:00:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.256 15:00:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.256 15:00:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.256 15:00:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.256 15:00:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.256 15:00:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:19.256 15:00:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:19.256 15:00:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.256 15:00:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.256 15:00:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.256 15:00:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.256 15:00:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.256 15:00:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.256 15:00:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.256 15:00:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.256 15:00:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.256 15:00:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.256 15:00:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.256 15:00:01 -- paths/export.sh@5 -- # export PATH 00:23:19.256 15:00:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.256 15:00:01 -- nvmf/common.sh@47 -- # : 0 00:23:19.256 15:00:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.256 15:00:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.256 15:00:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.256 15:00:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.256 15:00:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.256 15:00:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.256 15:00:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.256 15:00:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.256 15:00:01 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:19.256 15:00:01 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:19.256 15:00:01 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:19.256 15:00:01 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:19.256 15:00:01 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:19.256 15:00:01 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:19.256 15:00:01 -- host/discovery.sh@25 -- # nvmftestinit 00:23:19.256 15:00:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:19.256 15:00:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.256 15:00:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:19.256 15:00:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:19.256 15:00:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:19.256 15:00:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.256 15:00:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.256 15:00:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.256 15:00:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:19.256 15:00:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:19.256 15:00:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.256 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:23:25.840 15:00:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:25.841 15:00:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.841 15:00:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.841 15:00:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.841 15:00:08 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.841 15:00:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.841 15:00:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.841 15:00:08 -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.841 15:00:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.841 15:00:08 -- nvmf/common.sh@296 -- # e810=() 00:23:25.841 15:00:08 -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.841 15:00:08 -- nvmf/common.sh@297 -- # x722=() 00:23:25.841 15:00:08 -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.841 15:00:08 -- nvmf/common.sh@298 -- # mlx=() 00:23:25.841 15:00:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.841 15:00:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.841 15:00:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.841 15:00:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.841 15:00:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.841 15:00:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.841 15:00:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.841 15:00:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.841 15:00:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.841 15:00:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:25.841 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:25.841 15:00:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.841 15:00:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.841 15:00:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.841 15:00:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.841 15:00:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.841 15:00:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.841 15:00:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:25.841 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:26.101 15:00:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.101 15:00:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.101 15:00:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.101 15:00:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.101 15:00:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.101 15:00:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:26.101 15:00:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:26.101 15:00:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:26.101 15:00:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.101 
15:00:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.101 15:00:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:26.101 15:00:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.101 15:00:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:26.101 Found net devices under 0000:31:00.0: cvl_0_0 00:23:26.101 15:00:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.101 15:00:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.101 15:00:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.101 15:00:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:26.101 15:00:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.101 15:00:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:26.101 Found net devices under 0000:31:00.1: cvl_0_1 00:23:26.101 15:00:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.101 15:00:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:26.101 15:00:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:26.101 15:00:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:26.101 15:00:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:26.101 15:00:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:26.101 15:00:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.101 15:00:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.101 15:00:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.101 15:00:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:26.101 15:00:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.101 15:00:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.101 15:00:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:26.101 15:00:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.101 15:00:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.101 15:00:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:26.101 15:00:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:26.101 15:00:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.101 15:00:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.101 15:00:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.101 15:00:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.101 15:00:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:26.101 15:00:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.362 15:00:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.362 15:00:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.362 15:00:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:26.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:23:26.362 00:23:26.362 --- 10.0.0.2 ping statistics --- 00:23:26.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.362 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:23:26.362 15:00:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:26.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:23:26.362 00:23:26.362 --- 10.0.0.1 ping statistics --- 00:23:26.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.362 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:23:26.362 15:00:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.362 15:00:08 -- nvmf/common.sh@411 -- # return 0 00:23:26.362 15:00:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:26.362 15:00:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.363 15:00:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:26.363 15:00:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:26.363 15:00:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.363 15:00:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:26.363 15:00:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:26.363 15:00:08 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:26.363 15:00:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:26.363 15:00:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:26.363 15:00:08 -- common/autotest_common.sh@10 -- # set +x 00:23:26.363 15:00:08 -- nvmf/common.sh@470 -- # nvmfpid=1173957 00:23:26.363 15:00:08 -- nvmf/common.sh@471 -- # waitforlisten 1173957 00:23:26.363 15:00:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.363 15:00:08 -- common/autotest_common.sh@817 -- # '[' -z 1173957 ']' 00:23:26.363 15:00:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.363 15:00:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:26.363 15:00:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.363 15:00:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:26.363 15:00:08 -- common/autotest_common.sh@10 -- # set +x 00:23:26.363 [2024-04-26 15:00:08.915192] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:26.363 [2024-04-26 15:00:08.915260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.363 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.363 [2024-04-26 15:00:09.003931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.624 [2024-04-26 15:00:09.094346] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.624 [2024-04-26 15:00:09.094402] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.624 [2024-04-26 15:00:09.094410] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.624 [2024-04-26 15:00:09.094417] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.624 [2024-04-26 15:00:09.094423] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
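The nvmf_tcp_init sequence traced above reduces to a handful of iproute2/iptables steps; a condensed sketch follows (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this E810 test bed):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                     # target-side port lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator interface
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target interface
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
    modprobe nvme-tcp

With both pings answering, the target application is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is the nvmfpid=1173957 process shown above.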
00:23:26.624 [2024-04-26 15:00:09.094458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.197 15:00:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:27.197 15:00:09 -- common/autotest_common.sh@850 -- # return 0 00:23:27.197 15:00:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:27.197 15:00:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:27.197 15:00:09 -- common/autotest_common.sh@10 -- # set +x 00:23:27.197 15:00:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.197 15:00:09 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.197 15:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.197 15:00:09 -- common/autotest_common.sh@10 -- # set +x 00:23:27.197 [2024-04-26 15:00:09.769408] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.197 15:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.197 15:00:09 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:27.197 15:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.197 15:00:09 -- common/autotest_common.sh@10 -- # set +x 00:23:27.197 [2024-04-26 15:00:09.781645] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:27.197 15:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.197 15:00:09 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:27.197 15:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.197 15:00:09 -- common/autotest_common.sh@10 -- # set +x 00:23:27.197 null0 00:23:27.197 15:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.197 15:00:09 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:27.197 15:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.197 15:00:09 -- common/autotest_common.sh@10 -- # set +x 00:23:27.197 null1 00:23:27.197 15:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.197 15:00:09 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:27.197 15:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.197 15:00:09 -- common/autotest_common.sh@10 -- # set +x 00:23:27.197 15:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.197 15:00:09 -- host/discovery.sh@45 -- # hostpid=1174152 00:23:27.197 15:00:09 -- host/discovery.sh@46 -- # waitforlisten 1174152 /tmp/host.sock 00:23:27.197 15:00:09 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:27.197 15:00:09 -- common/autotest_common.sh@817 -- # '[' -z 1174152 ']' 00:23:27.197 15:00:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:27.197 15:00:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:27.197 15:00:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:27.197 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:27.197 15:00:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:27.197 15:00:09 -- common/autotest_common.sh@10 -- # set +x 00:23:27.458 [2024-04-26 15:00:09.876603] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:23:27.458 [2024-04-26 15:00:09.876672] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174152 ] 00:23:27.458 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.458 [2024-04-26 15:00:09.943509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.458 [2024-04-26 15:00:10.016714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.027 15:00:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:28.027 15:00:10 -- common/autotest_common.sh@850 -- # return 0 00:23:28.027 15:00:10 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.027 15:00:10 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:28.027 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.027 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.027 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.027 15:00:10 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:28.027 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.027 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.027 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.027 15:00:10 -- host/discovery.sh@72 -- # notify_id=0 00:23:28.027 15:00:10 -- host/discovery.sh@83 -- # get_subsystem_names 00:23:28.027 15:00:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.027 15:00:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.027 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.027 15:00:10 -- host/discovery.sh@59 -- # sort 00:23:28.027 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.027 15:00:10 -- host/discovery.sh@59 -- # xargs 00:23:28.288 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.288 15:00:10 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:28.288 15:00:10 -- host/discovery.sh@84 -- # get_bdev_list 00:23:28.288 15:00:10 -- host/discovery.sh@55 -- # sort 00:23:28.288 15:00:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.288 15:00:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.288 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.288 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.288 15:00:10 -- host/discovery.sh@55 -- # xargs 00:23:28.288 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.288 15:00:10 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:28.288 15:00:10 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:28.288 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.288 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.288 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.288 15:00:10 -- host/discovery.sh@87 -- # get_subsystem_names 00:23:28.288 15:00:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.288 15:00:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.288 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.288 15:00:10 -- host/discovery.sh@59 -- # sort 
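From here host/discovery.sh drives two SPDK processes: the target inside the cvl_0_0_ns_spdk namespace (RPC on the default /var/tmp/spdk.sock) and a second nvmf_tgt playing the host role on /tmp/host.sock (hostpid=1174152). Condensed from the rpc_cmd calls in the trace, the setup amounts to the following (rpc_cmd is the test harness's RPC wrapper; treating it as a thin front end for scripts/rpc.py is an assumption of this sketch):

    # target side: TCP transport, discovery listener on 8009, two null bdevs to export later
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # host side: enable bdev_nvme logging and attach the discovery service
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
            -f ipv4 -q nqn.2021-12.io.spdk:test

The get_subsystem_names and get_bdev_list checks that follow simply poll bdev_nvme_get_controllers and bdev_get_bdevs on /tmp/host.sock and compare the sorted, flattened names against the expected set.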
00:23:28.288 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.288 15:00:10 -- host/discovery.sh@59 -- # xargs 00:23:28.288 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.288 15:00:10 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:28.288 15:00:10 -- host/discovery.sh@88 -- # get_bdev_list 00:23:28.288 15:00:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.288 15:00:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.288 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.288 15:00:10 -- host/discovery.sh@55 -- # sort 00:23:28.288 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.288 15:00:10 -- host/discovery.sh@55 -- # xargs 00:23:28.288 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.288 15:00:10 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:28.288 15:00:10 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:28.288 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.288 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.288 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.288 15:00:10 -- host/discovery.sh@91 -- # get_subsystem_names 00:23:28.288 15:00:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.288 15:00:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.288 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.288 15:00:10 -- host/discovery.sh@59 -- # sort 00:23:28.288 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.288 15:00:10 -- host/discovery.sh@59 -- # xargs 00:23:28.288 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.549 15:00:10 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:28.549 15:00:10 -- host/discovery.sh@92 -- # get_bdev_list 00:23:28.549 15:00:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.549 15:00:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.549 15:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.549 15:00:10 -- host/discovery.sh@55 -- # sort 00:23:28.549 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 15:00:10 -- host/discovery.sh@55 -- # xargs 00:23:28.549 15:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.549 15:00:11 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:28.549 15:00:11 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:28.549 15:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.549 15:00:11 -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 [2024-04-26 15:00:11.028767] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.549 15:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.549 15:00:11 -- host/discovery.sh@97 -- # get_subsystem_names 00:23:28.549 15:00:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.549 15:00:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.549 15:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.549 15:00:11 -- host/discovery.sh@59 -- # sort 00:23:28.549 15:00:11 -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 15:00:11 -- host/discovery.sh@59 -- # xargs 00:23:28.549 15:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.549 15:00:11 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:28.549 15:00:11 -- host/discovery.sh@98 -- # get_bdev_list 00:23:28.549 15:00:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.549 15:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.549 15:00:11 -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 15:00:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.549 15:00:11 -- host/discovery.sh@55 -- # sort 00:23:28.549 15:00:11 -- host/discovery.sh@55 -- # xargs 00:23:28.549 15:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.549 15:00:11 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:28.549 15:00:11 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:28.549 15:00:11 -- host/discovery.sh@79 -- # expected_count=0 00:23:28.549 15:00:11 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:28.549 15:00:11 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:28.549 15:00:11 -- common/autotest_common.sh@901 -- # local max=10 00:23:28.549 15:00:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:28.549 15:00:11 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:28.549 15:00:11 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:28.549 15:00:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:28.549 15:00:11 -- host/discovery.sh@74 -- # jq '. | length' 00:23:28.549 15:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.549 15:00:11 -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 15:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.549 15:00:11 -- host/discovery.sh@74 -- # notification_count=0 00:23:28.549 15:00:11 -- host/discovery.sh@75 -- # notify_id=0 00:23:28.549 15:00:11 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:28.549 15:00:11 -- common/autotest_common.sh@904 -- # return 0 00:23:28.549 15:00:11 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:28.549 15:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.549 15:00:11 -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 15:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.549 15:00:11 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:28.549 15:00:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:28.549 15:00:11 -- common/autotest_common.sh@901 -- # local max=10 00:23:28.549 15:00:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:28.549 15:00:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:28.549 15:00:11 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:28.549 15:00:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.549 15:00:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.549 15:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.549 15:00:11 -- host/discovery.sh@59 -- # sort 00:23:28.549 15:00:11 -- common/autotest_common.sh@10 -- # set +x 00:23:28.549 15:00:11 -- host/discovery.sh@59 -- # xargs 00:23:28.549 15:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:23:28.808 15:00:11 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:23:28.808 15:00:11 -- common/autotest_common.sh@906 -- # sleep 1 00:23:29.068 [2024-04-26 15:00:11.728807] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.068 [2024-04-26 15:00:11.728827] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.068 [2024-04-26 15:00:11.728849] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.328 [2024-04-26 15:00:11.816128] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:29.328 [2024-04-26 15:00:11.919660] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.328 [2024-04-26 15:00:11.919684] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.588 15:00:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:29.588 15:00:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:29.588 15:00:12 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:29.588 15:00:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.588 15:00:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.588 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.588 15:00:12 -- host/discovery.sh@59 -- # sort 00:23:29.588 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:29.588 15:00:12 -- host/discovery.sh@59 -- # xargs 00:23:29.848 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.848 15:00:12 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.848 15:00:12 -- common/autotest_common.sh@904 -- # return 0 00:23:29.848 15:00:12 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:29.848 15:00:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:29.848 15:00:12 -- common/autotest_common.sh@901 -- # local max=10 00:23:29.848 15:00:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:29.848 15:00:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:29.848 15:00:12 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:29.848 15:00:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.848 15:00:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.848 15:00:12 -- host/discovery.sh@55 -- # sort 00:23:29.848 15:00:12 -- host/discovery.sh@55 -- # xargs 00:23:29.848 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.848 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:29.848 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.848 15:00:12 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:29.848 15:00:12 -- common/autotest_common.sh@904 -- # return 0 00:23:29.848 15:00:12 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:29.848 15:00:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:29.848 15:00:12 -- common/autotest_common.sh@901 -- # local max=10 00:23:29.848 15:00:12 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:23:29.848 15:00:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:29.848 15:00:12 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:29.848 15:00:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.848 15:00:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.848 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.848 15:00:12 -- host/discovery.sh@63 -- # sort -n 00:23:29.848 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:29.848 15:00:12 -- host/discovery.sh@63 -- # xargs 00:23:29.848 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.848 15:00:12 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:23:29.848 15:00:12 -- common/autotest_common.sh@904 -- # return 0 00:23:29.848 15:00:12 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:29.848 15:00:12 -- host/discovery.sh@79 -- # expected_count=1 00:23:29.848 15:00:12 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.849 15:00:12 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.849 15:00:12 -- common/autotest_common.sh@901 -- # local max=10 00:23:29.849 15:00:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:29.849 15:00:12 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.849 15:00:12 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:29.849 15:00:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:29.849 15:00:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.849 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.849 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:29.849 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.849 15:00:12 -- host/discovery.sh@74 -- # notification_count=1 00:23:29.849 15:00:12 -- host/discovery.sh@75 -- # notify_id=1 00:23:29.849 15:00:12 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:29.849 15:00:12 -- common/autotest_common.sh@904 -- # return 0 00:23:29.849 15:00:12 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:29.849 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.849 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:29.849 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.849 15:00:12 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.849 15:00:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.849 15:00:12 -- common/autotest_common.sh@901 -- # local max=10 00:23:29.849 15:00:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:29.849 15:00:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:29.849 15:00:12 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:29.849 15:00:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.849 15:00:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.849 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.849 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:29.849 15:00:12 -- host/discovery.sh@55 -- # sort 00:23:29.849 15:00:12 -- host/discovery.sh@55 -- # xargs 00:23:30.109 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.109 15:00:12 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:30.109 15:00:12 -- common/autotest_common.sh@904 -- # return 0 00:23:30.109 15:00:12 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:30.109 15:00:12 -- host/discovery.sh@79 -- # expected_count=1 00:23:30.109 15:00:12 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:30.109 15:00:12 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:30.109 15:00:12 -- common/autotest_common.sh@901 -- # local max=10 00:23:30.109 15:00:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:30.109 15:00:12 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:30.109 15:00:12 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:30.109 15:00:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:30.109 15:00:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:30.109 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.109 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:30.109 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.370 15:00:12 -- host/discovery.sh@74 -- # notification_count=1 00:23:30.370 15:00:12 -- host/discovery.sh@75 -- # notify_id=2 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:30.370 15:00:12 -- common/autotest_common.sh@904 -- # return 0 00:23:30.370 15:00:12 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:30.370 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.370 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:30.370 [2024-04-26 15:00:12.797456] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:30.370 [2024-04-26 15:00:12.798640] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:30.370 [2024-04-26 15:00:12.798668] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:30.370 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.370 15:00:12 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:30.370 15:00:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:30.370 15:00:12 -- common/autotest_common.sh@901 -- # local max=10 00:23:30.370 15:00:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:30.370 15:00:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:30.370 15:00:12 -- host/discovery.sh@59 -- # xargs 00:23:30.370 15:00:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:30.370 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.370 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:30.370 15:00:12 -- host/discovery.sh@59 -- # sort 00:23:30.370 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.370 15:00:12 -- common/autotest_common.sh@904 -- # return 0 00:23:30.370 15:00:12 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:30.370 15:00:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:30.370 15:00:12 -- common/autotest_common.sh@901 -- # local max=10 00:23:30.370 15:00:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:30.370 15:00:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.370 15:00:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:30.370 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.370 15:00:12 -- host/discovery.sh@55 -- # sort 00:23:30.370 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:30.370 15:00:12 -- host/discovery.sh@55 -- # xargs 00:23:30.370 [2024-04-26 15:00:12.887377] 
bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:30.370 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:30.370 15:00:12 -- common/autotest_common.sh@904 -- # return 0 00:23:30.370 15:00:12 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:30.370 15:00:12 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:30.370 15:00:12 -- common/autotest_common.sh@901 -- # local max=10 00:23:30.370 15:00:12 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:30.370 15:00:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:30.370 15:00:12 -- host/discovery.sh@63 -- # xargs 00:23:30.370 15:00:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:30.370 15:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.370 15:00:12 -- host/discovery.sh@63 -- # sort -n 00:23:30.370 15:00:12 -- common/autotest_common.sh@10 -- # set +x 00:23:30.370 15:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.370 15:00:12 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:30.370 15:00:12 -- common/autotest_common.sh@906 -- # sleep 1 00:23:30.370 [2024-04-26 15:00:12.951947] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:30.370 [2024-04-26 15:00:12.951965] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:30.370 [2024-04-26 15:00:12.951971] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:31.310 15:00:13 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:31.310 15:00:13 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:31.310 15:00:13 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:31.310 15:00:13 -- host/discovery.sh@63 -- # xargs 00:23:31.310 15:00:13 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:31.310 15:00:13 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:31.310 15:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.310 15:00:13 -- host/discovery.sh@63 -- # sort -n 00:23:31.310 15:00:13 -- common/autotest_common.sh@10 -- # set +x 00:23:31.572 15:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.572 15:00:14 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:31.572 15:00:14 -- common/autotest_common.sh@904 -- # return 0 00:23:31.572 15:00:14 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:31.572 15:00:14 -- host/discovery.sh@79 -- # expected_count=0 00:23:31.572 15:00:14 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:31.572 
15:00:14 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:31.572 15:00:14 -- common/autotest_common.sh@901 -- # local max=10 00:23:31.572 15:00:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:31.572 15:00:14 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:31.572 15:00:14 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:31.572 15:00:14 -- host/discovery.sh@74 -- # jq '. | length' 00:23:31.572 15:00:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:31.572 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.572 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.572 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.572 15:00:14 -- host/discovery.sh@74 -- # notification_count=0 00:23:31.572 15:00:14 -- host/discovery.sh@75 -- # notify_id=2 00:23:31.572 15:00:14 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:31.572 15:00:14 -- common/autotest_common.sh@904 -- # return 0 00:23:31.572 15:00:14 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:31.572 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.572 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.572 [2024-04-26 15:00:14.045244] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:31.572 [2024-04-26 15:00:14.045266] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:31.572 [2024-04-26 15:00:14.049554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.572 [2024-04-26 15:00:14.049574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.573 [2024-04-26 15:00:14.049584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.573 [2024-04-26 15:00:14.049592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.573 [2024-04-26 15:00:14.049601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.573 [2024-04-26 15:00:14.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.573 [2024-04-26 15:00:14.049617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.573 [2024-04-26 15:00:14.049624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.573 [2024-04-26 15:00:14.049632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3670 is same with the state(5) to be set 00:23:31.573 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.573 15:00:14 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:31.573 15:00:14 -- common/autotest_common.sh@900 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:31.573 15:00:14 -- common/autotest_common.sh@901 -- # local max=10 00:23:31.573 15:00:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:31.573 15:00:14 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:31.573 15:00:14 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:31.573 15:00:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:31.573 15:00:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:31.573 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.573 15:00:14 -- host/discovery.sh@59 -- # sort 00:23:31.573 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.573 15:00:14 -- host/discovery.sh@59 -- # xargs 00:23:31.573 [2024-04-26 15:00:14.059567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3670 (9): Bad file descriptor 00:23:31.573 [2024-04-26 15:00:14.069606] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.573 [2024-04-26 15:00:14.070098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.070439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.070453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3670 with addr=10.0.0.2, port=4420 00:23:31.573 [2024-04-26 15:00:14.070462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3670 is same with the state(5) to be set 00:23:31.573 [2024-04-26 15:00:14.070480] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3670 (9): Bad file descriptor 00:23:31.573 [2024-04-26 15:00:14.070516] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.573 [2024-04-26 15:00:14.070524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.573 [2024-04-26 15:00:14.070533] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.573 [2024-04-26 15:00:14.070548] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:31.573 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.573 [2024-04-26 15:00:14.079661] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.573 [2024-04-26 15:00:14.080087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.080306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.080320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3670 with addr=10.0.0.2, port=4420 00:23:31.573 [2024-04-26 15:00:14.080330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3670 is same with the state(5) to be set 00:23:31.573 [2024-04-26 15:00:14.080349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3670 (9): Bad file descriptor 00:23:31.573 [2024-04-26 15:00:14.080360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.573 [2024-04-26 15:00:14.080367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.573 [2024-04-26 15:00:14.080376] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.573 [2024-04-26 15:00:14.080390] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.573 [2024-04-26 15:00:14.089717] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.573 [2024-04-26 15:00:14.089969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.090269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.090278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3670 with addr=10.0.0.2, port=4420 00:23:31.573 [2024-04-26 15:00:14.090286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3670 is same with the state(5) to be set 00:23:31.573 [2024-04-26 15:00:14.090297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3670 (9): Bad file descriptor 00:23:31.573 [2024-04-26 15:00:14.090308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.573 [2024-04-26 15:00:14.090318] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.573 [2024-04-26 15:00:14.090325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.573 [2024-04-26 15:00:14.090336] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:31.573 [2024-04-26 15:00:14.099773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.573 [2024-04-26 15:00:14.100044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.100365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.100374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3670 with addr=10.0.0.2, port=4420 00:23:31.573 [2024-04-26 15:00:14.100382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3670 is same with the state(5) to be set 00:23:31.573 [2024-04-26 15:00:14.100393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3670 (9): Bad file descriptor 00:23:31.573 [2024-04-26 15:00:14.100403] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.573 [2024-04-26 15:00:14.100409] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.573 [2024-04-26 15:00:14.100416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.573 [2024-04-26 15:00:14.100427] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.573 15:00:14 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.573 15:00:14 -- common/autotest_common.sh@904 -- # return 0 00:23:31.573 15:00:14 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:31.573 15:00:14 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:31.573 15:00:14 -- common/autotest_common.sh@901 -- # local max=10 00:23:31.573 15:00:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:31.573 15:00:14 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:31.573 [2024-04-26 15:00:14.109826] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.573 [2024-04-26 15:00:14.110136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 15:00:14 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:31.573 [2024-04-26 15:00:14.110477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.110487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3670 with addr=10.0.0.2, port=4420 00:23:31.573 [2024-04-26 15:00:14.110494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3670 is same with the state(5) to be set 00:23:31.573 [2024-04-26 15:00:14.110505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3670 (9): Bad file descriptor 00:23:31.573 [2024-04-26 15:00:14.110515] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.573 [2024-04-26 15:00:14.110521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.573 [2024-04-26 15:00:14.110528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:31.573 [2024-04-26 15:00:14.110538] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.573 15:00:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.573 15:00:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:31.573 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.573 15:00:14 -- host/discovery.sh@55 -- # sort 00:23:31.573 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.573 15:00:14 -- host/discovery.sh@55 -- # xargs 00:23:31.573 [2024-04-26 15:00:14.119879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.573 [2024-04-26 15:00:14.120210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.120523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.120534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3670 with addr=10.0.0.2, port=4420 00:23:31.573 [2024-04-26 15:00:14.120541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3670 is same with the state(5) to be set 00:23:31.573 [2024-04-26 15:00:14.120552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3670 (9): Bad file descriptor 00:23:31.573 [2024-04-26 15:00:14.120569] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.573 [2024-04-26 15:00:14.120576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.573 [2024-04-26 15:00:14.120583] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.573 [2024-04-26 15:00:14.120594] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.573 [2024-04-26 15:00:14.129933] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.573 [2024-04-26 15:00:14.130245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.130566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.573 [2024-04-26 15:00:14.130575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b3670 with addr=10.0.0.2, port=4420 00:23:31.574 [2024-04-26 15:00:14.130582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b3670 is same with the state(5) to be set 00:23:31.574 [2024-04-26 15:00:14.130592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3670 (9): Bad file descriptor 00:23:31.574 [2024-04-26 15:00:14.130609] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:31.574 [2024-04-26 15:00:14.130616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:31.574 [2024-04-26 15:00:14.130623] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:31.574 [2024-04-26 15:00:14.130633] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
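The test then verifies that the host controller's path list collapses to 4421 only. The get_subsystem_paths check used above reduces to querying the controller over /tmp/host.sock and flattening the trsvcid fields; a sketch of that helper, assuming the same socket and controller name as in this run:

    get_subsystem_paths() {
            rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
                    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]   # i.e. "4421" once 4420 is gone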
00:23:31.574 [2024-04-26 15:00:14.132618] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:31.574 [2024-04-26 15:00:14.132635] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:31.574 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.574 15:00:14 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:31.574 15:00:14 -- common/autotest_common.sh@904 -- # return 0 00:23:31.574 15:00:14 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:31.574 15:00:14 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:31.574 15:00:14 -- common/autotest_common.sh@901 -- # local max=10 00:23:31.574 15:00:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:31.574 15:00:14 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:31.574 15:00:14 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:31.574 15:00:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:31.574 15:00:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:31.574 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.574 15:00:14 -- host/discovery.sh@63 -- # sort -n 00:23:31.574 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.574 15:00:14 -- host/discovery.sh@63 -- # xargs 00:23:31.574 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.574 15:00:14 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:31.574 15:00:14 -- common/autotest_common.sh@904 -- # return 0 00:23:31.574 15:00:14 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:31.574 15:00:14 -- host/discovery.sh@79 -- # expected_count=0 00:23:31.574 15:00:14 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:31.574 15:00:14 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:31.574 15:00:14 -- common/autotest_common.sh@901 -- # local max=10 00:23:31.574 15:00:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:31.574 15:00:14 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:31.574 15:00:14 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:31.574 15:00:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:31.574 15:00:14 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:31.574 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.574 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.574 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.835 15:00:14 -- host/discovery.sh@74 -- # notification_count=0 00:23:31.835 15:00:14 -- host/discovery.sh@75 -- # notify_id=2 00:23:31.835 15:00:14 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:31.835 15:00:14 -- common/autotest_common.sh@904 -- # return 0 00:23:31.835 15:00:14 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:31.835 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.835 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.835 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.835 15:00:14 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:31.835 15:00:14 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:31.835 15:00:14 -- common/autotest_common.sh@901 -- # local max=10 00:23:31.835 15:00:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:31.835 15:00:14 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:31.835 15:00:14 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:31.835 15:00:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:31.835 15:00:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:31.835 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.835 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.835 15:00:14 -- host/discovery.sh@59 -- # sort 00:23:31.835 15:00:14 -- host/discovery.sh@59 -- # xargs 00:23:31.835 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.835 15:00:14 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:31.835 15:00:14 -- common/autotest_common.sh@904 -- # return 0 00:23:31.835 15:00:14 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:31.835 15:00:14 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:31.835 15:00:14 -- common/autotest_common.sh@901 -- # local max=10 00:23:31.835 15:00:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:31.835 15:00:14 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:31.835 15:00:14 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:31.835 15:00:14 -- host/discovery.sh@55 -- # xargs 00:23:31.835 15:00:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.835 15:00:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:31.835 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.835 15:00:14 -- host/discovery.sh@55 -- # sort 00:23:31.835 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.835 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.835 15:00:14 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:31.835 15:00:14 -- common/autotest_common.sh@904 -- # return 0 00:23:31.835 15:00:14 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:31.835 15:00:14 -- host/discovery.sh@79 -- # expected_count=2 00:23:31.835 15:00:14 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:31.835 15:00:14 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:31.835 15:00:14 -- common/autotest_common.sh@901 -- # local max=10 00:23:31.835 15:00:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:31.835 15:00:14 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:31.835 15:00:14 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:31.835 15:00:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:31.835 15:00:14 -- host/discovery.sh@74 -- # jq '. | length' 00:23:31.836 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.836 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:31.836 15:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.836 15:00:14 -- host/discovery.sh@74 -- # notification_count=2 00:23:31.836 15:00:14 -- host/discovery.sh@75 -- # notify_id=4 00:23:31.836 15:00:14 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:31.836 15:00:14 -- common/autotest_common.sh@904 -- # return 0 00:23:31.836 15:00:14 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.836 15:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.836 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:33.220 [2024-04-26 15:00:15.498821] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:33.220 [2024-04-26 15:00:15.498842] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:33.220 [2024-04-26 15:00:15.498856] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:33.220 [2024-04-26 15:00:15.587139] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:33.220 [2024-04-26 15:00:15.650973] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:33.220 [2024-04-26 15:00:15.651003] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:33.220 15:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.220 15:00:15 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.220 15:00:15 -- common/autotest_common.sh@638 -- # local es=0 00:23:33.220 15:00:15 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.220 15:00:15 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:33.220 15:00:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:33.220 15:00:15 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:33.220 15:00:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:33.220 15:00:15 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.220 15:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.220 15:00:15 -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.220 request: 00:23:33.220 { 00:23:33.220 "name": "nvme", 00:23:33.220 "trtype": "tcp", 00:23:33.220 "traddr": "10.0.0.2", 00:23:33.220 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:33.220 "adrfam": "ipv4", 00:23:33.220 "trsvcid": "8009", 00:23:33.220 "wait_for_attach": true, 00:23:33.220 "method": "bdev_nvme_start_discovery", 00:23:33.220 "req_id": 1 00:23:33.220 } 00:23:33.220 Got JSON-RPC error response 00:23:33.220 response: 00:23:33.220 { 00:23:33.220 "code": -17, 00:23:33.220 "message": "File exists" 00:23:33.220 } 00:23:33.220 15:00:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:33.220 15:00:15 -- common/autotest_common.sh@641 -- # es=1 00:23:33.220 15:00:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:33.220 15:00:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:33.220 15:00:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:33.220 15:00:15 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:33.220 15:00:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.220 15:00:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:33.220 15:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.220 15:00:15 -- host/discovery.sh@67 -- # sort 00:23:33.220 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:23:33.220 15:00:15 -- host/discovery.sh@67 -- # xargs 00:23:33.220 15:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.220 15:00:15 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:33.220 15:00:15 -- host/discovery.sh@146 -- # get_bdev_list 00:23:33.220 15:00:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.220 15:00:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:33.220 15:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.220 15:00:15 -- host/discovery.sh@55 -- # sort 00:23:33.220 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:23:33.220 15:00:15 -- host/discovery.sh@55 -- # xargs 00:23:33.220 15:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.220 15:00:15 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:33.220 15:00:15 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.220 15:00:15 -- common/autotest_common.sh@638 -- # local es=0 00:23:33.220 15:00:15 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.220 15:00:15 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:33.220 15:00:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:33.220 15:00:15 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:33.220 15:00:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:33.220 15:00:15 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:33.220 15:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.220 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:23:33.220 request: 00:23:33.220 { 00:23:33.220 "name": "nvme_second", 00:23:33.220 "trtype": "tcp", 00:23:33.220 "traddr": "10.0.0.2", 00:23:33.220 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:23:33.220 "adrfam": "ipv4", 00:23:33.220 "trsvcid": "8009", 00:23:33.220 "wait_for_attach": true, 00:23:33.220 "method": "bdev_nvme_start_discovery", 00:23:33.220 "req_id": 1 00:23:33.220 } 00:23:33.220 Got JSON-RPC error response 00:23:33.220 response: 00:23:33.220 { 00:23:33.220 "code": -17, 00:23:33.220 "message": "File exists" 00:23:33.220 } 00:23:33.220 15:00:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:33.220 15:00:15 -- common/autotest_common.sh@641 -- # es=1 00:23:33.220 15:00:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:33.220 15:00:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:33.220 15:00:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:33.220 15:00:15 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:33.220 15:00:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.220 15:00:15 -- host/discovery.sh@67 -- # xargs 00:23:33.220 15:00:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:33.220 15:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.220 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:23:33.220 15:00:15 -- host/discovery.sh@67 -- # sort 00:23:33.220 15:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.220 15:00:15 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:33.220 15:00:15 -- host/discovery.sh@152 -- # get_bdev_list 00:23:33.220 15:00:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.220 15:00:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:33.220 15:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.220 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:23:33.220 15:00:15 -- host/discovery.sh@55 -- # sort 00:23:33.220 15:00:15 -- host/discovery.sh@55 -- # xargs 00:23:33.220 15:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.481 15:00:15 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:33.481 15:00:15 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:33.481 15:00:15 -- common/autotest_common.sh@638 -- # local es=0 00:23:33.481 15:00:15 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:33.481 15:00:15 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:33.481 15:00:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:33.481 15:00:15 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:33.481 15:00:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:33.481 15:00:15 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:33.481 15:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.481 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:23:34.421 [2024-04-26 15:00:16.914498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.421 [2024-04-26 15:00:16.914805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.421 [2024-04-26 15:00:16.914816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x8af890 with addr=10.0.0.2, port=8010 00:23:34.421 [2024-04-26 15:00:16.914828] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:34.421 [2024-04-26 15:00:16.914835] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:34.421 [2024-04-26 15:00:16.914847] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:35.360 [2024-04-26 15:00:17.916843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.360 [2024-04-26 15:00:17.917140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.360 [2024-04-26 15:00:17.917150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8cd190 with addr=10.0.0.2, port=8010 00:23:35.360 [2024-04-26 15:00:17.917161] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:35.360 [2024-04-26 15:00:17.917168] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:35.360 [2024-04-26 15:00:17.917174] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:36.300 [2024-04-26 15:00:18.918820] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:36.300 request: 00:23:36.300 { 00:23:36.300 "name": "nvme_second", 00:23:36.300 "trtype": "tcp", 00:23:36.300 "traddr": "10.0.0.2", 00:23:36.300 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:36.300 "adrfam": "ipv4", 00:23:36.300 "trsvcid": "8010", 00:23:36.300 "attach_timeout_ms": 3000, 00:23:36.300 "method": "bdev_nvme_start_discovery", 00:23:36.300 "req_id": 1 00:23:36.300 } 00:23:36.300 Got JSON-RPC error response 00:23:36.300 response: 00:23:36.300 { 00:23:36.300 "code": -110, 00:23:36.300 "message": "Connection timed out" 00:23:36.300 } 00:23:36.300 15:00:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:36.300 15:00:18 -- common/autotest_common.sh@641 -- # es=1 00:23:36.300 15:00:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:36.300 15:00:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:36.300 15:00:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:36.300 15:00:18 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:36.300 15:00:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:36.300 15:00:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.300 15:00:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:36.300 15:00:18 -- common/autotest_common.sh@10 -- # set +x 00:23:36.300 15:00:18 -- host/discovery.sh@67 -- # sort 00:23:36.300 15:00:18 -- host/discovery.sh@67 -- # xargs 00:23:36.300 15:00:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.560 15:00:18 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:36.560 15:00:18 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:36.560 15:00:18 -- host/discovery.sh@161 -- # kill 1174152 00:23:36.560 15:00:18 -- host/discovery.sh@162 -- # nvmftestfini 00:23:36.560 15:00:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:36.560 15:00:18 -- nvmf/common.sh@117 -- # sync 00:23:36.560 15:00:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.560 15:00:18 -- nvmf/common.sh@120 -- # set +e 00:23:36.560 15:00:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.560 15:00:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.560 rmmod nvme_tcp 00:23:36.560 rmmod nvme_fabrics 
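The duplicate-start and timeout failures traced above come from bdev_nvme_start_discovery calls against the host app's RPC socket. A minimal sketch of the same sequence with the SPDK rpc.py client follows; the socket path /tmp/host.sock, address 10.0.0.2 and ports 8009/8010 are simply the values used in this run, and rpc.py is assumed to be invoked from the SPDK tree root.

# Assumes an SPDK app listens on /tmp/host.sock and a TCP discovery service
# is reachable at 10.0.0.2:8009, as in the run above.
RPC="scripts/rpc.py -s /tmp/host.sock"

# Start discovery; -w blocks until the discovered subsystem is attached.
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# A second start against the same 10.0.0.2:8009 endpoint (same or new -b name)
# is rejected with JSON-RPC error -17 "File exists", as seen above.
$RPC bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w || true

# Pointing a new name at a port nobody listens on (8010) with a 3000 ms attach
# timeout fails with -110 "Connection timed out".
$RPC bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || true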
00:23:36.560 rmmod nvme_keyring 00:23:36.560 15:00:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.560 15:00:19 -- nvmf/common.sh@124 -- # set -e 00:23:36.560 15:00:19 -- nvmf/common.sh@125 -- # return 0 00:23:36.560 15:00:19 -- nvmf/common.sh@478 -- # '[' -n 1173957 ']' 00:23:36.560 15:00:19 -- nvmf/common.sh@479 -- # killprocess 1173957 00:23:36.560 15:00:19 -- common/autotest_common.sh@936 -- # '[' -z 1173957 ']' 00:23:36.560 15:00:19 -- common/autotest_common.sh@940 -- # kill -0 1173957 00:23:36.560 15:00:19 -- common/autotest_common.sh@941 -- # uname 00:23:36.560 15:00:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.560 15:00:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1173957 00:23:36.560 15:00:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:36.560 15:00:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:36.560 15:00:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1173957' 00:23:36.560 killing process with pid 1173957 00:23:36.560 15:00:19 -- common/autotest_common.sh@955 -- # kill 1173957 00:23:36.560 15:00:19 -- common/autotest_common.sh@960 -- # wait 1173957 00:23:36.560 15:00:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:36.560 15:00:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:36.560 15:00:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:36.560 15:00:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.560 15:00:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.560 15:00:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.560 15:00:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.560 15:00:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.106 15:00:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.106 00:23:39.106 real 0m19.756s 00:23:39.106 user 0m23.242s 00:23:39.106 sys 0m6.763s 00:23:39.106 15:00:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:39.106 15:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:39.106 ************************************ 00:23:39.106 END TEST nvmf_discovery 00:23:39.106 ************************************ 00:23:39.106 15:00:21 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:39.106 15:00:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:39.106 15:00:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.106 15:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:39.106 ************************************ 00:23:39.106 START TEST nvmf_discovery_remove_ifc 00:23:39.106 ************************************ 00:23:39.106 15:00:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:39.106 * Looking for test storage... 
00:23:39.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.106 15:00:21 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.106 15:00:21 -- nvmf/common.sh@7 -- # uname -s 00:23:39.106 15:00:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.106 15:00:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.106 15:00:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.106 15:00:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.106 15:00:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.106 15:00:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.106 15:00:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.106 15:00:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.106 15:00:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.106 15:00:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.106 15:00:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.106 15:00:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.106 15:00:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.106 15:00:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.106 15:00:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.106 15:00:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.106 15:00:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.106 15:00:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.106 15:00:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.106 15:00:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.106 15:00:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.106 15:00:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.106 15:00:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.106 15:00:21 -- paths/export.sh@5 -- # export PATH 00:23:39.106 15:00:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.106 15:00:21 -- nvmf/common.sh@47 -- # : 0 00:23:39.106 15:00:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.106 15:00:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.106 15:00:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.106 15:00:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.106 15:00:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.106 15:00:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.106 15:00:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.106 15:00:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.106 15:00:21 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:39.106 15:00:21 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:39.106 15:00:21 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:39.106 15:00:21 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:39.106 15:00:21 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:39.106 15:00:21 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:39.106 15:00:21 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:39.106 15:00:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:39.106 15:00:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.106 15:00:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:39.106 15:00:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:39.106 15:00:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:39.106 15:00:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.106 15:00:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.106 15:00:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.106 15:00:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:39.106 15:00:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:39.106 15:00:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.106 15:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:47.248 15:00:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:47.248 15:00:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:47.248 15:00:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:47.248 15:00:28 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:47.248 15:00:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:47.248 15:00:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:47.248 15:00:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:47.248 15:00:28 -- nvmf/common.sh@295 -- # net_devs=() 00:23:47.248 15:00:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:47.248 15:00:28 -- nvmf/common.sh@296 -- # e810=() 00:23:47.248 15:00:28 -- nvmf/common.sh@296 -- # local -ga e810 00:23:47.248 15:00:28 -- nvmf/common.sh@297 -- # x722=() 00:23:47.248 15:00:28 -- nvmf/common.sh@297 -- # local -ga x722 00:23:47.248 15:00:28 -- nvmf/common.sh@298 -- # mlx=() 00:23:47.248 15:00:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:47.248 15:00:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.248 15:00:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:47.248 15:00:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:47.248 15:00:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:47.248 15:00:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:47.248 15:00:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:47.248 15:00:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:47.248 15:00:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.248 15:00:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:47.248 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:47.248 15:00:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.248 15:00:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.248 15:00:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.248 15:00:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.248 15:00:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.248 15:00:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.249 15:00:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:47.249 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:47.249 15:00:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:47.249 15:00:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:47.249 15:00:28 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.249 15:00:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.249 15:00:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:47.249 15:00:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.249 15:00:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:47.249 Found net devices under 0000:31:00.0: cvl_0_0 00:23:47.249 15:00:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.249 15:00:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.249 15:00:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.249 15:00:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:47.249 15:00:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.249 15:00:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:47.249 Found net devices under 0000:31:00.1: cvl_0_1 00:23:47.249 15:00:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.249 15:00:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:47.249 15:00:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:47.249 15:00:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:47.249 15:00:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.249 15:00:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.249 15:00:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.249 15:00:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:47.249 15:00:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.249 15:00:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.249 15:00:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:47.249 15:00:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.249 15:00:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.249 15:00:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:47.249 15:00:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:47.249 15:00:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.249 15:00:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.249 15:00:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.249 15:00:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.249 15:00:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:47.249 15:00:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.249 15:00:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.249 15:00:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.249 15:00:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:47.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:47.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:23:47.249 00:23:47.249 --- 10.0.0.2 ping statistics --- 00:23:47.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.249 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:23:47.249 15:00:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:23:47.249 00:23:47.249 --- 10.0.0.1 ping statistics --- 00:23:47.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.249 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:23:47.249 15:00:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.249 15:00:28 -- nvmf/common.sh@411 -- # return 0 00:23:47.249 15:00:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:47.249 15:00:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.249 15:00:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:47.249 15:00:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.249 15:00:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:47.249 15:00:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:47.249 15:00:28 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:47.249 15:00:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:47.249 15:00:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:47.249 15:00:28 -- common/autotest_common.sh@10 -- # set +x 00:23:47.249 15:00:28 -- nvmf/common.sh@470 -- # nvmfpid=1180243 00:23:47.249 15:00:28 -- nvmf/common.sh@471 -- # waitforlisten 1180243 00:23:47.249 15:00:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:47.249 15:00:28 -- common/autotest_common.sh@817 -- # '[' -z 1180243 ']' 00:23:47.249 15:00:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.249 15:00:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:47.249 15:00:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.249 15:00:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:47.249 15:00:28 -- common/autotest_common.sh@10 -- # set +x 00:23:47.249 [2024-04-26 15:00:28.883733] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:47.249 [2024-04-26 15:00:28.883800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.249 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.249 [2024-04-26 15:00:28.973925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.249 [2024-04-26 15:00:29.065758] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.249 [2024-04-26 15:00:29.065817] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:47.249 [2024-04-26 15:00:29.065826] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.249 [2024-04-26 15:00:29.065833] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.249 [2024-04-26 15:00:29.065850] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.249 [2024-04-26 15:00:29.065881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.249 15:00:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:47.249 15:00:29 -- common/autotest_common.sh@850 -- # return 0 00:23:47.249 15:00:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:47.249 15:00:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:47.249 15:00:29 -- common/autotest_common.sh@10 -- # set +x 00:23:47.249 15:00:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.249 15:00:29 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:47.249 15:00:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:47.249 15:00:29 -- common/autotest_common.sh@10 -- # set +x 00:23:47.249 [2024-04-26 15:00:29.712816] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.249 [2024-04-26 15:00:29.721034] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:47.249 null0 00:23:47.249 [2024-04-26 15:00:29.753013] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.249 15:00:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:47.249 15:00:29 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1180586 00:23:47.249 15:00:29 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1180586 /tmp/host.sock 00:23:47.249 15:00:29 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:47.249 15:00:29 -- common/autotest_common.sh@817 -- # '[' -z 1180586 ']' 00:23:47.249 15:00:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:47.249 15:00:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:47.249 15:00:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:47.249 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:47.249 15:00:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:47.249 15:00:29 -- common/autotest_common.sh@10 -- # set +x 00:23:47.249 [2024-04-26 15:00:29.824777] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
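At this point the test has two SPDK processes up: the target inside the cvl_0_0_ns_spdk namespace, which owns cvl_0_0/10.0.0.2 and exposes the TCP discovery listener on port 8009 plus an I/O listener on 4420 backed by a null bdev, and a host-side app on the private RPC socket /tmp/host.sock that runs the discovery service under test. Reduced to the two launch commands visible in the trace (paths shortened to the SPDK tree root, core masks and names as used in this run):

# Target application, started inside the namespace that holds the target NIC.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# It is then configured over its default RPC socket to listen on 10.0.0.2:8009
# for discovery and 10.0.0.2:4420 for I/O, per the "NVMe/TCP Target Listening"
# notices above.

# Host application on a separate RPC socket; the test drives it with
# "rpc_cmd -s /tmp/host.sock ..." and the bdev_nvme debug log enabled.
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &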
00:23:47.249 [2024-04-26 15:00:29.824845] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180586 ] 00:23:47.249 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.249 [2024-04-26 15:00:29.889187] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.511 [2024-04-26 15:00:29.961667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.083 15:00:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:48.083 15:00:30 -- common/autotest_common.sh@850 -- # return 0 00:23:48.083 15:00:30 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.083 15:00:30 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:48.083 15:00:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.083 15:00:30 -- common/autotest_common.sh@10 -- # set +x 00:23:48.083 15:00:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.083 15:00:30 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:48.083 15:00:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.083 15:00:30 -- common/autotest_common.sh@10 -- # set +x 00:23:48.083 15:00:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.083 15:00:30 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:48.083 15:00:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.083 15:00:30 -- common/autotest_common.sh@10 -- # set +x 00:23:49.468 [2024-04-26 15:00:31.722775] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:49.468 [2024-04-26 15:00:31.722797] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:49.468 [2024-04-26 15:00:31.722810] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:49.468 [2024-04-26 15:00:31.852238] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:49.468 [2024-04-26 15:00:31.953558] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:49.468 [2024-04-26 15:00:31.953605] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:49.468 [2024-04-26 15:00:31.953624] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:49.468 [2024-04-26 15:00:31.953638] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:49.468 [2024-04-26 15:00:31.953658] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:49.468 15:00:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.468 15:00:31 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:49.468 15:00:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:49.468 15:00:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.468 [2024-04-26 
15:00:31.960675] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb9ced0 was disconnected and freed. delete nvme_qpair. 00:23:49.468 15:00:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:49.468 15:00:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.468 15:00:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:49.468 15:00:31 -- common/autotest_common.sh@10 -- # set +x 00:23:49.468 15:00:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:49.468 15:00:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.468 15:00:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:49.468 15:00:32 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:49.468 15:00:32 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:49.728 15:00:32 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:49.728 15:00:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:49.728 15:00:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:49.728 15:00:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.728 15:00:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:49.728 15:00:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.728 15:00:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:49.728 15:00:32 -- common/autotest_common.sh@10 -- # set +x 00:23:49.728 15:00:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.728 15:00:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:49.728 15:00:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:50.670 15:00:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.670 15:00:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.670 15:00:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.670 15:00:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.670 15:00:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.670 15:00:33 -- common/autotest_common.sh@10 -- # set +x 00:23:50.670 15:00:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.670 15:00:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.670 15:00:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:50.670 15:00:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:51.612 15:00:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.612 15:00:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.612 15:00:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.612 15:00:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.612 15:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.612 15:00:34 -- common/autotest_common.sh@10 -- # set +x 00:23:51.612 15:00:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:51.872 15:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.872 15:00:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:51.872 15:00:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:52.811 15:00:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:52.811 15:00:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.811 15:00:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:52.811 15:00:35 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.811 15:00:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:52.811 15:00:35 -- common/autotest_common.sh@10 -- # set +x 00:23:52.811 15:00:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:52.811 15:00:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.811 15:00:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:52.811 15:00:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:53.759 15:00:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:53.759 15:00:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:53.759 15:00:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.759 15:00:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.759 15:00:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:53.759 15:00:36 -- common/autotest_common.sh@10 -- # set +x 00:23:53.759 15:00:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:53.759 15:00:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.759 15:00:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:53.759 15:00:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:54.753 [2024-04-26 15:00:37.394284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:54.753 [2024-04-26 15:00:37.394322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.753 [2024-04-26 15:00:37.394333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.753 [2024-04-26 15:00:37.394343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.753 [2024-04-26 15:00:37.394351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.753 [2024-04-26 15:00:37.394359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.753 [2024-04-26 15:00:37.394367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.753 [2024-04-26 15:00:37.394375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.753 [2024-04-26 15:00:37.394382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.753 [2024-04-26 15:00:37.394390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.753 [2024-04-26 15:00:37.394398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.753 [2024-04-26 15:00:37.394405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb633f0 is same with the state(5) to be set 00:23:54.753 [2024-04-26 15:00:37.404306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb633f0 (9): Bad file descriptor 00:23:55.041 [2024-04-26 15:00:37.414345] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:55.041 15:00:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:55.041 15:00:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.041 15:00:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:55.041 15:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.041 15:00:37 -- common/autotest_common.sh@10 -- # set +x 00:23:55.041 15:00:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:55.041 15:00:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:55.979 [2024-04-26 15:00:38.472875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:56.930 [2024-04-26 15:00:39.496890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:56.930 [2024-04-26 15:00:39.496930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb633f0 with addr=10.0.0.2, port=4420 00:23:56.930 [2024-04-26 15:00:39.496944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb633f0 is same with the state(5) to be set 00:23:56.930 [2024-04-26 15:00:39.497316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb633f0 (9): Bad file descriptor 00:23:56.930 [2024-04-26 15:00:39.497339] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:56.930 [2024-04-26 15:00:39.497360] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:56.930 [2024-04-26 15:00:39.497382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.930 [2024-04-26 15:00:39.497391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.930 [2024-04-26 15:00:39.497401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.930 [2024-04-26 15:00:39.497409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.930 [2024-04-26 15:00:39.497417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.930 [2024-04-26 15:00:39.497424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.930 [2024-04-26 15:00:39.497432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.930 [2024-04-26 15:00:39.497439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.930 [2024-04-26 15:00:39.497448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.930 [2024-04-26 15:00:39.497455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.930 [2024-04-26 15:00:39.497462] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:56.930 [2024-04-26 15:00:39.497968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb63800 (9): Bad file descriptor 00:23:56.930 [2024-04-26 15:00:39.498980] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:56.930 [2024-04-26 15:00:39.498991] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:56.930 15:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.930 15:00:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:56.930 15:00:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:57.871 15:00:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:57.871 15:00:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.871 15:00:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:57.871 15:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.871 15:00:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:57.871 15:00:40 -- common/autotest_common.sh@10 -- # set +x 00:23:57.871 15:00:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:58.132 15:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:58.132 15:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.132 15:00:40 -- common/autotest_common.sh@10 -- # set +x 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:58.132 15:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:58.132 15:00:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:59.074 [2024-04-26 15:00:41.558029] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:59.074 [2024-04-26 15:00:41.558050] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:59.074 [2024-04-26 15:00:41.558064] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:59.074 [2024-04-26 15:00:41.647332] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:59.335 [2024-04-26 15:00:41.747180] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:59.335 [2024-04-26 15:00:41.747219] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:59.335 [2024-04-26 15:00:41.747238] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:59.335 [2024-04-26 15:00:41.747252] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 
done 00:23:59.335 [2024-04-26 15:00:41.747260] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:59.335 15:00:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:59.335 15:00:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.335 15:00:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:59.335 15:00:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.335 15:00:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:59.335 15:00:41 -- common/autotest_common.sh@10 -- # set +x 00:23:59.335 15:00:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:59.335 [2024-04-26 15:00:41.754708] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb73cc0 was disconnected and freed. delete nvme_qpair. 00:23:59.335 15:00:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.335 15:00:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:59.335 15:00:41 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:59.335 15:00:41 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1180586 00:23:59.335 15:00:41 -- common/autotest_common.sh@936 -- # '[' -z 1180586 ']' 00:23:59.335 15:00:41 -- common/autotest_common.sh@940 -- # kill -0 1180586 00:23:59.335 15:00:41 -- common/autotest_common.sh@941 -- # uname 00:23:59.335 15:00:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:59.335 15:00:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1180586 00:23:59.335 15:00:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:59.335 15:00:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:59.335 15:00:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1180586' 00:23:59.335 killing process with pid 1180586 00:23:59.335 15:00:41 -- common/autotest_common.sh@955 -- # kill 1180586 00:23:59.335 15:00:41 -- common/autotest_common.sh@960 -- # wait 1180586 00:23:59.335 15:00:41 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:59.335 15:00:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:59.335 15:00:41 -- nvmf/common.sh@117 -- # sync 00:23:59.335 15:00:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.335 15:00:41 -- nvmf/common.sh@120 -- # set +e 00:23:59.335 15:00:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.335 15:00:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.335 rmmod nvme_tcp 00:23:59.596 rmmod nvme_fabrics 00:23:59.596 rmmod nvme_keyring 00:23:59.596 15:00:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.596 15:00:42 -- nvmf/common.sh@124 -- # set -e 00:23:59.596 15:00:42 -- nvmf/common.sh@125 -- # return 0 00:23:59.596 15:00:42 -- nvmf/common.sh@478 -- # '[' -n 1180243 ']' 00:23:59.596 15:00:42 -- nvmf/common.sh@479 -- # killprocess 1180243 00:23:59.596 15:00:42 -- common/autotest_common.sh@936 -- # '[' -z 1180243 ']' 00:23:59.596 15:00:42 -- common/autotest_common.sh@940 -- # kill -0 1180243 00:23:59.596 15:00:42 -- common/autotest_common.sh@941 -- # uname 00:23:59.596 15:00:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:59.596 15:00:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1180243 00:23:59.596 15:00:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:59.596 15:00:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:59.596 15:00:42 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1180243' 00:23:59.596 killing process with pid 1180243 00:23:59.596 15:00:42 -- common/autotest_common.sh@955 -- # kill 1180243 00:23:59.596 15:00:42 -- common/autotest_common.sh@960 -- # wait 1180243 00:23:59.596 15:00:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:59.596 15:00:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:59.596 15:00:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:59.596 15:00:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.596 15:00:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.596 15:00:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.596 15:00:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.596 15:00:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.149 15:00:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.149 00:24:02.149 real 0m22.823s 00:24:02.149 user 0m25.813s 00:24:02.149 sys 0m6.711s 00:24:02.149 15:00:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:02.149 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:24:02.149 ************************************ 00:24:02.149 END TEST nvmf_discovery_remove_ifc 00:24:02.149 ************************************ 00:24:02.149 15:00:44 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:02.149 15:00:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:02.149 15:00:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.149 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:24:02.149 ************************************ 00:24:02.149 START TEST nvmf_identify_kernel_target 00:24:02.149 ************************************ 00:24:02.149 15:00:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:02.149 * Looking for test storage... 
00:24:02.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.149 15:00:44 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.149 15:00:44 -- nvmf/common.sh@7 -- # uname -s 00:24:02.149 15:00:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.149 15:00:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.149 15:00:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.149 15:00:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.149 15:00:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.149 15:00:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.149 15:00:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.149 15:00:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.149 15:00:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.149 15:00:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.149 15:00:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:02.149 15:00:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:02.149 15:00:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.149 15:00:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.149 15:00:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.149 15:00:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.149 15:00:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.149 15:00:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.149 15:00:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.149 15:00:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.149 15:00:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.149 15:00:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.149 15:00:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.149 15:00:44 -- paths/export.sh@5 -- # export PATH 00:24:02.149 15:00:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.149 15:00:44 -- nvmf/common.sh@47 -- # : 0 00:24:02.149 15:00:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.149 15:00:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.149 15:00:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.149 15:00:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.149 15:00:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.149 15:00:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.149 15:00:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.149 15:00:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.149 15:00:44 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:02.149 15:00:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:02.149 15:00:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.149 15:00:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:02.149 15:00:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:02.149 15:00:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:02.149 15:00:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.149 15:00:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.149 15:00:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.149 15:00:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:02.149 15:00:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:02.149 15:00:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.149 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:24:10.290 15:00:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:10.290 15:00:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.290 15:00:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.290 15:00:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.290 15:00:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.290 15:00:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.290 15:00:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.290 15:00:51 -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.290 15:00:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.290 15:00:51 -- nvmf/common.sh@296 -- # e810=() 00:24:10.290 15:00:51 -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.290 15:00:51 -- nvmf/common.sh@297 -- # 
x722=() 00:24:10.290 15:00:51 -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.290 15:00:51 -- nvmf/common.sh@298 -- # mlx=() 00:24:10.290 15:00:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.290 15:00:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.290 15:00:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.290 15:00:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.290 15:00:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.290 15:00:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.290 15:00:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.290 15:00:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.290 15:00:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.290 15:00:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:10.290 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:10.290 15:00:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.290 15:00:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.290 15:00:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.291 15:00:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:10.291 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:10.291 15:00:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.291 15:00:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.291 15:00:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.291 15:00:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:10.291 15:00:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.291 15:00:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:10.291 Found net devices under 0000:31:00.0: cvl_0_0 00:24:10.291 15:00:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:24:10.291 15:00:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.291 15:00:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.291 15:00:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:10.291 15:00:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.291 15:00:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:10.291 Found net devices under 0000:31:00.1: cvl_0_1 00:24:10.291 15:00:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.291 15:00:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:10.291 15:00:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:10.291 15:00:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:10.291 15:00:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.291 15:00:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.291 15:00:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.291 15:00:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.291 15:00:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.291 15:00:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.291 15:00:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.291 15:00:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.291 15:00:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.291 15:00:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.291 15:00:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.291 15:00:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.291 15:00:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.291 15:00:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.291 15:00:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.291 15:00:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.291 15:00:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.291 15:00:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.291 15:00:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.291 15:00:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:24:10.291 00:24:10.291 --- 10.0.0.2 ping statistics --- 00:24:10.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.291 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:24:10.291 15:00:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:24:10.291 00:24:10.291 --- 10.0.0.1 ping statistics --- 00:24:10.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.291 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:24:10.291 15:00:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.291 15:00:51 -- nvmf/common.sh@411 -- # return 0 00:24:10.291 15:00:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:10.291 15:00:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.291 15:00:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.291 15:00:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:10.291 15:00:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:10.291 15:00:51 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:10.291 15:00:51 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:10.291 15:00:51 -- nvmf/common.sh@717 -- # local ip 00:24:10.291 15:00:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:10.291 15:00:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:10.291 15:00:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.291 15:00:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.291 15:00:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:10.291 15:00:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:10.291 15:00:51 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:10.291 15:00:51 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:10.291 15:00:51 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:10.291 15:00:51 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:10.291 15:00:51 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:10.291 15:00:51 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:10.291 15:00:51 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:10.291 15:00:51 -- nvmf/common.sh@628 -- # local block nvme 00:24:10.291 15:00:51 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:10.291 15:00:51 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:10.291 15:00:51 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:12.832 Waiting for block devices as requested 00:24:12.832 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:12.832 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:12.832 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:12.832 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:13.093 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:13.093 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:13.093 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:13.093 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:13.354 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:13.354 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:13.616 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:13.616 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:13.616 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:13.875 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:13.875 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:13.875 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:13.875 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:14.141 15:00:56 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:14.141 15:00:56 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:14.141 15:00:56 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:14.141 15:00:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:14.141 15:00:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:14.141 15:00:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:14.141 15:00:56 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:14.141 15:00:56 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:14.141 15:00:56 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:14.401 No valid GPT data, bailing 00:24:14.401 15:00:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:14.401 15:00:56 -- scripts/common.sh@391 -- # pt= 00:24:14.401 15:00:56 -- scripts/common.sh@392 -- # return 1 00:24:14.401 15:00:56 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:14.401 15:00:56 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:14.401 15:00:56 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:14.401 15:00:56 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:14.401 15:00:56 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:14.401 15:00:56 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:14.401 15:00:56 -- nvmf/common.sh@656 -- # echo 1 00:24:14.401 15:00:56 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:14.401 15:00:56 -- nvmf/common.sh@658 -- # echo 1 00:24:14.401 15:00:56 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:14.401 15:00:56 -- nvmf/common.sh@661 -- # echo tcp 00:24:14.401 15:00:56 -- nvmf/common.sh@662 -- # echo 4420 00:24:14.401 15:00:56 -- nvmf/common.sh@663 -- # echo ipv4 00:24:14.401 15:00:56 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:14.401 15:00:56 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:24:14.401 00:24:14.401 Discovery Log Number of Records 2, Generation counter 2 00:24:14.401 =====Discovery Log Entry 0====== 00:24:14.401 trtype: tcp 00:24:14.401 adrfam: ipv4 00:24:14.401 subtype: current discovery subsystem 00:24:14.401 treq: not specified, sq flow control disable supported 00:24:14.401 portid: 1 00:24:14.401 trsvcid: 4420 00:24:14.401 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:14.401 traddr: 10.0.0.1 00:24:14.401 eflags: none 00:24:14.401 sectype: none 00:24:14.401 =====Discovery Log Entry 1====== 00:24:14.401 trtype: tcp 00:24:14.401 adrfam: ipv4 00:24:14.401 subtype: nvme subsystem 00:24:14.401 treq: not specified, sq flow control disable supported 00:24:14.401 portid: 1 00:24:14.401 trsvcid: 4420 00:24:14.401 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:14.401 traddr: 10.0.0.1 00:24:14.401 eflags: none 00:24:14.401 sectype: none 00:24:14.401 15:00:56 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:14.401 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:14.401 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.401 ===================================================== 00:24:14.401 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:14.401 ===================================================== 00:24:14.401 Controller Capabilities/Features 00:24:14.401 ================================ 00:24:14.401 Vendor ID: 0000 00:24:14.401 Subsystem Vendor ID: 0000 00:24:14.401 Serial Number: 697ae313d65e75e30ba0 00:24:14.401 Model Number: Linux 00:24:14.401 Firmware Version: 6.7.0-68 00:24:14.401 Recommended Arb Burst: 0 00:24:14.401 IEEE OUI Identifier: 00 00 00 00:24:14.401 Multi-path I/O 00:24:14.401 May have multiple subsystem ports: No 00:24:14.401 May have multiple controllers: No 00:24:14.401 Associated with SR-IOV VF: No 00:24:14.401 Max Data Transfer Size: Unlimited 00:24:14.401 Max Number of Namespaces: 0 00:24:14.401 Max Number of I/O Queues: 1024 00:24:14.401 NVMe Specification Version (VS): 1.3 00:24:14.401 NVMe Specification Version (Identify): 1.3 00:24:14.401 Maximum Queue Entries: 1024 00:24:14.401 Contiguous Queues Required: No 00:24:14.401 Arbitration Mechanisms Supported 00:24:14.401 Weighted Round Robin: Not Supported 00:24:14.401 Vendor Specific: Not Supported 00:24:14.401 Reset Timeout: 7500 ms 00:24:14.401 Doorbell Stride: 4 bytes 00:24:14.401 NVM Subsystem Reset: Not Supported 00:24:14.401 Command Sets Supported 00:24:14.401 NVM Command Set: Supported 00:24:14.401 Boot Partition: Not Supported 00:24:14.401 Memory Page Size Minimum: 4096 bytes 00:24:14.401 Memory Page Size Maximum: 4096 bytes 00:24:14.401 Persistent Memory Region: Not Supported 00:24:14.401 Optional Asynchronous Events Supported 00:24:14.401 Namespace Attribute Notices: Not Supported 00:24:14.401 Firmware Activation Notices: Not Supported 00:24:14.401 ANA Change Notices: Not Supported 00:24:14.401 PLE Aggregate Log Change Notices: Not Supported 00:24:14.401 LBA Status Info Alert Notices: Not Supported 00:24:14.401 EGE Aggregate Log Change Notices: Not Supported 00:24:14.401 Normal NVM Subsystem Shutdown event: Not Supported 00:24:14.401 Zone Descriptor Change Notices: Not Supported 00:24:14.401 Discovery Log Change Notices: Supported 
00:24:14.401 Controller Attributes 00:24:14.401 128-bit Host Identifier: Not Supported 00:24:14.401 Non-Operational Permissive Mode: Not Supported 00:24:14.401 NVM Sets: Not Supported 00:24:14.401 Read Recovery Levels: Not Supported 00:24:14.401 Endurance Groups: Not Supported 00:24:14.401 Predictable Latency Mode: Not Supported 00:24:14.401 Traffic Based Keep ALive: Not Supported 00:24:14.401 Namespace Granularity: Not Supported 00:24:14.401 SQ Associations: Not Supported 00:24:14.401 UUID List: Not Supported 00:24:14.401 Multi-Domain Subsystem: Not Supported 00:24:14.401 Fixed Capacity Management: Not Supported 00:24:14.401 Variable Capacity Management: Not Supported 00:24:14.401 Delete Endurance Group: Not Supported 00:24:14.401 Delete NVM Set: Not Supported 00:24:14.402 Extended LBA Formats Supported: Not Supported 00:24:14.402 Flexible Data Placement Supported: Not Supported 00:24:14.402 00:24:14.402 Controller Memory Buffer Support 00:24:14.402 ================================ 00:24:14.402 Supported: No 00:24:14.402 00:24:14.402 Persistent Memory Region Support 00:24:14.402 ================================ 00:24:14.402 Supported: No 00:24:14.402 00:24:14.402 Admin Command Set Attributes 00:24:14.402 ============================ 00:24:14.402 Security Send/Receive: Not Supported 00:24:14.402 Format NVM: Not Supported 00:24:14.402 Firmware Activate/Download: Not Supported 00:24:14.402 Namespace Management: Not Supported 00:24:14.402 Device Self-Test: Not Supported 00:24:14.402 Directives: Not Supported 00:24:14.402 NVMe-MI: Not Supported 00:24:14.402 Virtualization Management: Not Supported 00:24:14.402 Doorbell Buffer Config: Not Supported 00:24:14.402 Get LBA Status Capability: Not Supported 00:24:14.402 Command & Feature Lockdown Capability: Not Supported 00:24:14.402 Abort Command Limit: 1 00:24:14.402 Async Event Request Limit: 1 00:24:14.402 Number of Firmware Slots: N/A 00:24:14.402 Firmware Slot 1 Read-Only: N/A 00:24:14.402 Firmware Activation Without Reset: N/A 00:24:14.402 Multiple Update Detection Support: N/A 00:24:14.402 Firmware Update Granularity: No Information Provided 00:24:14.402 Per-Namespace SMART Log: No 00:24:14.402 Asymmetric Namespace Access Log Page: Not Supported 00:24:14.402 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:14.402 Command Effects Log Page: Not Supported 00:24:14.402 Get Log Page Extended Data: Supported 00:24:14.402 Telemetry Log Pages: Not Supported 00:24:14.402 Persistent Event Log Pages: Not Supported 00:24:14.402 Supported Log Pages Log Page: May Support 00:24:14.402 Commands Supported & Effects Log Page: Not Supported 00:24:14.402 Feature Identifiers & Effects Log Page:May Support 00:24:14.402 NVMe-MI Commands & Effects Log Page: May Support 00:24:14.402 Data Area 4 for Telemetry Log: Not Supported 00:24:14.402 Error Log Page Entries Supported: 1 00:24:14.402 Keep Alive: Not Supported 00:24:14.402 00:24:14.402 NVM Command Set Attributes 00:24:14.402 ========================== 00:24:14.402 Submission Queue Entry Size 00:24:14.402 Max: 1 00:24:14.402 Min: 1 00:24:14.402 Completion Queue Entry Size 00:24:14.402 Max: 1 00:24:14.402 Min: 1 00:24:14.402 Number of Namespaces: 0 00:24:14.402 Compare Command: Not Supported 00:24:14.402 Write Uncorrectable Command: Not Supported 00:24:14.402 Dataset Management Command: Not Supported 00:24:14.402 Write Zeroes Command: Not Supported 00:24:14.402 Set Features Save Field: Not Supported 00:24:14.402 Reservations: Not Supported 00:24:14.402 Timestamp: Not Supported 00:24:14.402 Copy: Not 
Supported 00:24:14.402 Volatile Write Cache: Not Present 00:24:14.402 Atomic Write Unit (Normal): 1 00:24:14.402 Atomic Write Unit (PFail): 1 00:24:14.402 Atomic Compare & Write Unit: 1 00:24:14.402 Fused Compare & Write: Not Supported 00:24:14.402 Scatter-Gather List 00:24:14.402 SGL Command Set: Supported 00:24:14.402 SGL Keyed: Not Supported 00:24:14.402 SGL Bit Bucket Descriptor: Not Supported 00:24:14.402 SGL Metadata Pointer: Not Supported 00:24:14.402 Oversized SGL: Not Supported 00:24:14.402 SGL Metadata Address: Not Supported 00:24:14.402 SGL Offset: Supported 00:24:14.402 Transport SGL Data Block: Not Supported 00:24:14.402 Replay Protected Memory Block: Not Supported 00:24:14.402 00:24:14.402 Firmware Slot Information 00:24:14.402 ========================= 00:24:14.402 Active slot: 0 00:24:14.402 00:24:14.402 00:24:14.402 Error Log 00:24:14.402 ========= 00:24:14.402 00:24:14.402 Active Namespaces 00:24:14.402 ================= 00:24:14.402 Discovery Log Page 00:24:14.402 ================== 00:24:14.402 Generation Counter: 2 00:24:14.402 Number of Records: 2 00:24:14.402 Record Format: 0 00:24:14.402 00:24:14.402 Discovery Log Entry 0 00:24:14.402 ---------------------- 00:24:14.402 Transport Type: 3 (TCP) 00:24:14.402 Address Family: 1 (IPv4) 00:24:14.402 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:14.402 Entry Flags: 00:24:14.402 Duplicate Returned Information: 0 00:24:14.402 Explicit Persistent Connection Support for Discovery: 0 00:24:14.402 Transport Requirements: 00:24:14.402 Secure Channel: Not Specified 00:24:14.402 Port ID: 1 (0x0001) 00:24:14.402 Controller ID: 65535 (0xffff) 00:24:14.402 Admin Max SQ Size: 32 00:24:14.402 Transport Service Identifier: 4420 00:24:14.402 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:14.402 Transport Address: 10.0.0.1 00:24:14.402 Discovery Log Entry 1 00:24:14.402 ---------------------- 00:24:14.402 Transport Type: 3 (TCP) 00:24:14.402 Address Family: 1 (IPv4) 00:24:14.402 Subsystem Type: 2 (NVM Subsystem) 00:24:14.402 Entry Flags: 00:24:14.402 Duplicate Returned Information: 0 00:24:14.402 Explicit Persistent Connection Support for Discovery: 0 00:24:14.402 Transport Requirements: 00:24:14.402 Secure Channel: Not Specified 00:24:14.402 Port ID: 1 (0x0001) 00:24:14.402 Controller ID: 65535 (0xffff) 00:24:14.402 Admin Max SQ Size: 32 00:24:14.402 Transport Service Identifier: 4420 00:24:14.402 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:14.402 Transport Address: 10.0.0.1 00:24:14.402 15:00:57 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:14.402 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.665 get_feature(0x01) failed 00:24:14.665 get_feature(0x02) failed 00:24:14.665 get_feature(0x04) failed 00:24:14.665 ===================================================== 00:24:14.665 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:14.665 ===================================================== 00:24:14.665 Controller Capabilities/Features 00:24:14.665 ================================ 00:24:14.665 Vendor ID: 0000 00:24:14.665 Subsystem Vendor ID: 0000 00:24:14.665 Serial Number: a9e75024dc1d0568e682 00:24:14.665 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:14.665 Firmware Version: 6.7.0-68 00:24:14.665 Recommended Arb Burst: 6 00:24:14.665 IEEE OUI Identifier: 00 00 00 
00:24:14.665 Multi-path I/O 00:24:14.665 May have multiple subsystem ports: Yes 00:24:14.665 May have multiple controllers: Yes 00:24:14.665 Associated with SR-IOV VF: No 00:24:14.665 Max Data Transfer Size: Unlimited 00:24:14.665 Max Number of Namespaces: 1024 00:24:14.665 Max Number of I/O Queues: 128 00:24:14.665 NVMe Specification Version (VS): 1.3 00:24:14.665 NVMe Specification Version (Identify): 1.3 00:24:14.665 Maximum Queue Entries: 1024 00:24:14.665 Contiguous Queues Required: No 00:24:14.665 Arbitration Mechanisms Supported 00:24:14.665 Weighted Round Robin: Not Supported 00:24:14.665 Vendor Specific: Not Supported 00:24:14.665 Reset Timeout: 7500 ms 00:24:14.665 Doorbell Stride: 4 bytes 00:24:14.665 NVM Subsystem Reset: Not Supported 00:24:14.665 Command Sets Supported 00:24:14.665 NVM Command Set: Supported 00:24:14.665 Boot Partition: Not Supported 00:24:14.665 Memory Page Size Minimum: 4096 bytes 00:24:14.665 Memory Page Size Maximum: 4096 bytes 00:24:14.665 Persistent Memory Region: Not Supported 00:24:14.665 Optional Asynchronous Events Supported 00:24:14.665 Namespace Attribute Notices: Supported 00:24:14.665 Firmware Activation Notices: Not Supported 00:24:14.665 ANA Change Notices: Supported 00:24:14.665 PLE Aggregate Log Change Notices: Not Supported 00:24:14.665 LBA Status Info Alert Notices: Not Supported 00:24:14.665 EGE Aggregate Log Change Notices: Not Supported 00:24:14.665 Normal NVM Subsystem Shutdown event: Not Supported 00:24:14.665 Zone Descriptor Change Notices: Not Supported 00:24:14.665 Discovery Log Change Notices: Not Supported 00:24:14.665 Controller Attributes 00:24:14.665 128-bit Host Identifier: Supported 00:24:14.665 Non-Operational Permissive Mode: Not Supported 00:24:14.665 NVM Sets: Not Supported 00:24:14.665 Read Recovery Levels: Not Supported 00:24:14.665 Endurance Groups: Not Supported 00:24:14.665 Predictable Latency Mode: Not Supported 00:24:14.665 Traffic Based Keep ALive: Supported 00:24:14.665 Namespace Granularity: Not Supported 00:24:14.665 SQ Associations: Not Supported 00:24:14.665 UUID List: Not Supported 00:24:14.665 Multi-Domain Subsystem: Not Supported 00:24:14.665 Fixed Capacity Management: Not Supported 00:24:14.665 Variable Capacity Management: Not Supported 00:24:14.665 Delete Endurance Group: Not Supported 00:24:14.665 Delete NVM Set: Not Supported 00:24:14.665 Extended LBA Formats Supported: Not Supported 00:24:14.665 Flexible Data Placement Supported: Not Supported 00:24:14.665 00:24:14.665 Controller Memory Buffer Support 00:24:14.665 ================================ 00:24:14.665 Supported: No 00:24:14.665 00:24:14.665 Persistent Memory Region Support 00:24:14.665 ================================ 00:24:14.665 Supported: No 00:24:14.665 00:24:14.665 Admin Command Set Attributes 00:24:14.665 ============================ 00:24:14.665 Security Send/Receive: Not Supported 00:24:14.665 Format NVM: Not Supported 00:24:14.665 Firmware Activate/Download: Not Supported 00:24:14.665 Namespace Management: Not Supported 00:24:14.665 Device Self-Test: Not Supported 00:24:14.665 Directives: Not Supported 00:24:14.665 NVMe-MI: Not Supported 00:24:14.665 Virtualization Management: Not Supported 00:24:14.665 Doorbell Buffer Config: Not Supported 00:24:14.665 Get LBA Status Capability: Not Supported 00:24:14.665 Command & Feature Lockdown Capability: Not Supported 00:24:14.665 Abort Command Limit: 4 00:24:14.665 Async Event Request Limit: 4 00:24:14.665 Number of Firmware Slots: N/A 00:24:14.665 Firmware Slot 1 Read-Only: N/A 00:24:14.665 
Firmware Activation Without Reset: N/A 00:24:14.665 Multiple Update Detection Support: N/A 00:24:14.665 Firmware Update Granularity: No Information Provided 00:24:14.665 Per-Namespace SMART Log: Yes 00:24:14.665 Asymmetric Namespace Access Log Page: Supported 00:24:14.665 ANA Transition Time : 10 sec 00:24:14.665 00:24:14.665 Asymmetric Namespace Access Capabilities 00:24:14.665 ANA Optimized State : Supported 00:24:14.665 ANA Non-Optimized State : Supported 00:24:14.665 ANA Inaccessible State : Supported 00:24:14.665 ANA Persistent Loss State : Supported 00:24:14.665 ANA Change State : Supported 00:24:14.665 ANAGRPID is not changed : No 00:24:14.665 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:14.665 00:24:14.665 ANA Group Identifier Maximum : 128 00:24:14.665 Number of ANA Group Identifiers : 128 00:24:14.665 Max Number of Allowed Namespaces : 1024 00:24:14.665 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:14.665 Command Effects Log Page: Supported 00:24:14.665 Get Log Page Extended Data: Supported 00:24:14.665 Telemetry Log Pages: Not Supported 00:24:14.665 Persistent Event Log Pages: Not Supported 00:24:14.665 Supported Log Pages Log Page: May Support 00:24:14.665 Commands Supported & Effects Log Page: Not Supported 00:24:14.665 Feature Identifiers & Effects Log Page:May Support 00:24:14.665 NVMe-MI Commands & Effects Log Page: May Support 00:24:14.665 Data Area 4 for Telemetry Log: Not Supported 00:24:14.665 Error Log Page Entries Supported: 128 00:24:14.665 Keep Alive: Supported 00:24:14.665 Keep Alive Granularity: 1000 ms 00:24:14.665 00:24:14.665 NVM Command Set Attributes 00:24:14.665 ========================== 00:24:14.665 Submission Queue Entry Size 00:24:14.665 Max: 64 00:24:14.665 Min: 64 00:24:14.665 Completion Queue Entry Size 00:24:14.665 Max: 16 00:24:14.665 Min: 16 00:24:14.665 Number of Namespaces: 1024 00:24:14.665 Compare Command: Not Supported 00:24:14.665 Write Uncorrectable Command: Not Supported 00:24:14.665 Dataset Management Command: Supported 00:24:14.665 Write Zeroes Command: Supported 00:24:14.665 Set Features Save Field: Not Supported 00:24:14.665 Reservations: Not Supported 00:24:14.665 Timestamp: Not Supported 00:24:14.665 Copy: Not Supported 00:24:14.665 Volatile Write Cache: Present 00:24:14.665 Atomic Write Unit (Normal): 1 00:24:14.665 Atomic Write Unit (PFail): 1 00:24:14.665 Atomic Compare & Write Unit: 1 00:24:14.665 Fused Compare & Write: Not Supported 00:24:14.665 Scatter-Gather List 00:24:14.665 SGL Command Set: Supported 00:24:14.665 SGL Keyed: Not Supported 00:24:14.665 SGL Bit Bucket Descriptor: Not Supported 00:24:14.665 SGL Metadata Pointer: Not Supported 00:24:14.665 Oversized SGL: Not Supported 00:24:14.665 SGL Metadata Address: Not Supported 00:24:14.665 SGL Offset: Supported 00:24:14.665 Transport SGL Data Block: Not Supported 00:24:14.665 Replay Protected Memory Block: Not Supported 00:24:14.665 00:24:14.665 Firmware Slot Information 00:24:14.665 ========================= 00:24:14.665 Active slot: 0 00:24:14.665 00:24:14.665 Asymmetric Namespace Access 00:24:14.665 =========================== 00:24:14.666 Change Count : 0 00:24:14.666 Number of ANA Group Descriptors : 1 00:24:14.666 ANA Group Descriptor : 0 00:24:14.666 ANA Group ID : 1 00:24:14.666 Number of NSID Values : 1 00:24:14.666 Change Count : 0 00:24:14.666 ANA State : 1 00:24:14.666 Namespace Identifier : 1 00:24:14.666 00:24:14.666 Commands Supported and Effects 00:24:14.666 ============================== 00:24:14.666 Admin Commands 00:24:14.666 -------------- 
00:24:14.666 Get Log Page (02h): Supported 00:24:14.666 Identify (06h): Supported 00:24:14.666 Abort (08h): Supported 00:24:14.666 Set Features (09h): Supported 00:24:14.666 Get Features (0Ah): Supported 00:24:14.666 Asynchronous Event Request (0Ch): Supported 00:24:14.666 Keep Alive (18h): Supported 00:24:14.666 I/O Commands 00:24:14.666 ------------ 00:24:14.666 Flush (00h): Supported 00:24:14.666 Write (01h): Supported LBA-Change 00:24:14.666 Read (02h): Supported 00:24:14.666 Write Zeroes (08h): Supported LBA-Change 00:24:14.666 Dataset Management (09h): Supported 00:24:14.666 00:24:14.666 Error Log 00:24:14.666 ========= 00:24:14.666 Entry: 0 00:24:14.666 Error Count: 0x3 00:24:14.666 Submission Queue Id: 0x0 00:24:14.666 Command Id: 0x5 00:24:14.666 Phase Bit: 0 00:24:14.666 Status Code: 0x2 00:24:14.666 Status Code Type: 0x0 00:24:14.666 Do Not Retry: 1 00:24:14.666 Error Location: 0x28 00:24:14.666 LBA: 0x0 00:24:14.666 Namespace: 0x0 00:24:14.666 Vendor Log Page: 0x0 00:24:14.666 ----------- 00:24:14.666 Entry: 1 00:24:14.666 Error Count: 0x2 00:24:14.666 Submission Queue Id: 0x0 00:24:14.666 Command Id: 0x5 00:24:14.666 Phase Bit: 0 00:24:14.666 Status Code: 0x2 00:24:14.666 Status Code Type: 0x0 00:24:14.666 Do Not Retry: 1 00:24:14.666 Error Location: 0x28 00:24:14.666 LBA: 0x0 00:24:14.666 Namespace: 0x0 00:24:14.666 Vendor Log Page: 0x0 00:24:14.666 ----------- 00:24:14.666 Entry: 2 00:24:14.666 Error Count: 0x1 00:24:14.666 Submission Queue Id: 0x0 00:24:14.666 Command Id: 0x4 00:24:14.666 Phase Bit: 0 00:24:14.666 Status Code: 0x2 00:24:14.666 Status Code Type: 0x0 00:24:14.666 Do Not Retry: 1 00:24:14.666 Error Location: 0x28 00:24:14.666 LBA: 0x0 00:24:14.666 Namespace: 0x0 00:24:14.666 Vendor Log Page: 0x0 00:24:14.666 00:24:14.666 Number of Queues 00:24:14.666 ================ 00:24:14.666 Number of I/O Submission Queues: 128 00:24:14.666 Number of I/O Completion Queues: 128 00:24:14.666 00:24:14.666 ZNS Specific Controller Data 00:24:14.666 ============================ 00:24:14.666 Zone Append Size Limit: 0 00:24:14.666 00:24:14.666 00:24:14.666 Active Namespaces 00:24:14.666 ================= 00:24:14.666 get_feature(0x05) failed 00:24:14.666 Namespace ID:1 00:24:14.666 Command Set Identifier: NVM (00h) 00:24:14.666 Deallocate: Supported 00:24:14.666 Deallocated/Unwritten Error: Not Supported 00:24:14.666 Deallocated Read Value: Unknown 00:24:14.666 Deallocate in Write Zeroes: Not Supported 00:24:14.666 Deallocated Guard Field: 0xFFFF 00:24:14.666 Flush: Supported 00:24:14.666 Reservation: Not Supported 00:24:14.666 Namespace Sharing Capabilities: Multiple Controllers 00:24:14.666 Size (in LBAs): 3750748848 (1788GiB) 00:24:14.666 Capacity (in LBAs): 3750748848 (1788GiB) 00:24:14.666 Utilization (in LBAs): 3750748848 (1788GiB) 00:24:14.666 UUID: 31579a07-528b-4419-908a-c74c51edb9cf 00:24:14.666 Thin Provisioning: Not Supported 00:24:14.666 Per-NS Atomic Units: Yes 00:24:14.666 Atomic Write Unit (Normal): 8 00:24:14.666 Atomic Write Unit (PFail): 8 00:24:14.666 Preferred Write Granularity: 8 00:24:14.666 Atomic Compare & Write Unit: 8 00:24:14.666 Atomic Boundary Size (Normal): 0 00:24:14.666 Atomic Boundary Size (PFail): 0 00:24:14.666 Atomic Boundary Offset: 0 00:24:14.666 NGUID/EUI64 Never Reused: No 00:24:14.666 ANA group ID: 1 00:24:14.666 Namespace Write Protected: No 00:24:14.666 Number of LBA Formats: 1 00:24:14.666 Current LBA Format: LBA Format #00 00:24:14.666 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:14.666 00:24:14.666 15:00:57 -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:14.666 15:00:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:14.666 15:00:57 -- nvmf/common.sh@117 -- # sync 00:24:14.666 15:00:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.666 15:00:57 -- nvmf/common.sh@120 -- # set +e 00:24:14.666 15:00:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.666 15:00:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.666 rmmod nvme_tcp 00:24:14.666 rmmod nvme_fabrics 00:24:14.666 15:00:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.666 15:00:57 -- nvmf/common.sh@124 -- # set -e 00:24:14.666 15:00:57 -- nvmf/common.sh@125 -- # return 0 00:24:14.666 15:00:57 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:24:14.666 15:00:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:14.666 15:00:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:14.666 15:00:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:14.666 15:00:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.666 15:00:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.666 15:00:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.666 15:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.666 15:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.580 15:00:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.580 15:00:59 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:16.580 15:00:59 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:16.580 15:00:59 -- nvmf/common.sh@675 -- # echo 0 00:24:16.580 15:00:59 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:16.841 15:00:59 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:16.841 15:00:59 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:16.841 15:00:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:16.841 15:00:59 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:16.841 15:00:59 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:16.841 15:00:59 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:20.140 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:20.140 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:20.400 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:20.400 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:20.400 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:20.400 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:20.400 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:22.310 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:24:22.310 00:24:22.310 real 0m20.455s 00:24:22.310 user 0m5.125s 00:24:22.310 sys 0m10.612s 00:24:22.310 
15:01:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:22.310 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:24:22.310 ************************************ 00:24:22.310 END TEST nvmf_identify_kernel_target 00:24:22.310 ************************************ 00:24:22.310 15:01:04 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:22.573 15:01:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:22.573 15:01:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:22.573 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:24:22.573 ************************************ 00:24:22.573 START TEST nvmf_auth 00:24:22.573 ************************************ 00:24:22.573 15:01:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:22.573 * Looking for test storage... 00:24:22.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.573 15:01:05 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.833 15:01:05 -- nvmf/common.sh@7 -- # uname -s 00:24:22.833 15:01:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.833 15:01:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.833 15:01:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.833 15:01:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.833 15:01:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.833 15:01:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.833 15:01:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.833 15:01:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.833 15:01:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.833 15:01:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.833 15:01:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:22.833 15:01:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:22.833 15:01:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.833 15:01:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.833 15:01:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.833 15:01:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.833 15:01:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.833 15:01:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.833 15:01:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.833 15:01:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.833 15:01:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.833 15:01:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.833 15:01:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.833 15:01:05 -- paths/export.sh@5 -- # export PATH 00:24:22.834 15:01:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.834 15:01:05 -- nvmf/common.sh@47 -- # : 0 00:24:22.834 15:01:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.834 15:01:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.834 15:01:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.834 15:01:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.834 15:01:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.834 15:01:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.834 15:01:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.834 15:01:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.834 15:01:05 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:22.834 15:01:05 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:22.834 15:01:05 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:22.834 15:01:05 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:22.834 15:01:05 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:22.834 15:01:05 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:22.834 15:01:05 -- host/auth.sh@21 -- # keys=() 00:24:22.834 15:01:05 -- host/auth.sh@77 -- # nvmftestinit 00:24:22.834 15:01:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:22.834 15:01:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.834 15:01:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:22.834 15:01:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:22.834 15:01:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:22.834 15:01:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.834 15:01:05 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.834 15:01:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.834 15:01:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:22.834 15:01:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:22.834 15:01:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.834 15:01:05 -- common/autotest_common.sh@10 -- # set +x 00:24:30.968 15:01:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:30.968 15:01:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:30.968 15:01:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.968 15:01:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.968 15:01:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.968 15:01:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.968 15:01:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.968 15:01:12 -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.968 15:01:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.968 15:01:12 -- nvmf/common.sh@296 -- # e810=() 00:24:30.968 15:01:12 -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.968 15:01:12 -- nvmf/common.sh@297 -- # x722=() 00:24:30.968 15:01:12 -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.968 15:01:12 -- nvmf/common.sh@298 -- # mlx=() 00:24:30.968 15:01:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.968 15:01:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.968 15:01:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.968 15:01:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:30.968 15:01:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.968 15:01:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.968 15:01:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:30.968 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:30.968 15:01:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.968 15:01:12 -- nvmf/common.sh@341 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:24:30.968 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:30.968 15:01:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.968 15:01:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.968 15:01:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.968 15:01:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:30.968 15:01:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.968 15:01:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:30.968 Found net devices under 0000:31:00.0: cvl_0_0 00:24:30.968 15:01:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.968 15:01:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.968 15:01:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.968 15:01:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:30.968 15:01:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.968 15:01:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:30.968 Found net devices under 0000:31:00.1: cvl_0_1 00:24:30.968 15:01:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.968 15:01:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:30.968 15:01:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:30.968 15:01:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:30.968 15:01:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:30.968 15:01:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.968 15:01:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.968 15:01:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.968 15:01:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.968 15:01:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.968 15:01:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.968 15:01:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.968 15:01:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.968 15:01:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.968 15:01:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.968 15:01:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.968 15:01:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.968 15:01:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.968 15:01:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.968 15:01:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.968 15:01:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.968 15:01:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.968 15:01:12 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.968 15:01:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.968 15:01:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:24:30.969 00:24:30.969 --- 10.0.0.2 ping statistics --- 00:24:30.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.969 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:24:30.969 15:01:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:24:30.969 00:24:30.969 --- 10.0.0.1 ping statistics --- 00:24:30.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.969 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:24:30.969 15:01:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.969 15:01:12 -- nvmf/common.sh@411 -- # return 0 00:24:30.969 15:01:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:30.969 15:01:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.969 15:01:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:30.969 15:01:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:30.969 15:01:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.969 15:01:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:30.969 15:01:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:30.969 15:01:12 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:24:30.969 15:01:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:30.969 15:01:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:30.969 15:01:12 -- common/autotest_common.sh@10 -- # set +x 00:24:30.969 15:01:12 -- nvmf/common.sh@470 -- # nvmfpid=1195278 00:24:30.969 15:01:12 -- nvmf/common.sh@471 -- # waitforlisten 1195278 00:24:30.969 15:01:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:30.969 15:01:12 -- common/autotest_common.sh@817 -- # '[' -z 1195278 ']' 00:24:30.969 15:01:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.969 15:01:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:30.969 15:01:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:30.969 15:01:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:30.969 15:01:12 -- common/autotest_common.sh@10 -- # set +x 00:24:30.969 15:01:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:30.969 15:01:13 -- common/autotest_common.sh@850 -- # return 0 00:24:30.969 15:01:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:30.969 15:01:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.969 15:01:13 -- common/autotest_common.sh@10 -- # set +x 00:24:30.969 15:01:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.969 15:01:13 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:30.969 15:01:13 -- host/auth.sh@81 -- # gen_key null 32 00:24:30.969 15:01:13 -- host/auth.sh@53 -- # local digest len file key 00:24:30.969 15:01:13 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:30.969 15:01:13 -- host/auth.sh@54 -- # local -A digests 00:24:30.969 15:01:13 -- host/auth.sh@56 -- # digest=null 00:24:30.969 15:01:13 -- host/auth.sh@56 -- # len=32 00:24:30.969 15:01:13 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:30.969 15:01:13 -- host/auth.sh@57 -- # key=372486267c732402ab2d7f0aae1e7ac2 00:24:30.969 15:01:13 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:30.969 15:01:13 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.APu 00:24:30.969 15:01:13 -- host/auth.sh@59 -- # format_dhchap_key 372486267c732402ab2d7f0aae1e7ac2 0 00:24:30.969 15:01:13 -- nvmf/common.sh@708 -- # format_key DHHC-1 372486267c732402ab2d7f0aae1e7ac2 0 00:24:30.969 15:01:13 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # key=372486267c732402ab2d7f0aae1e7ac2 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # digest=0 00:24:30.969 15:01:13 -- nvmf/common.sh@694 -- # python - 00:24:30.969 15:01:13 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.APu 00:24:30.969 15:01:13 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.APu 00:24:30.969 15:01:13 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.APu 00:24:30.969 15:01:13 -- host/auth.sh@82 -- # gen_key null 48 00:24:30.969 15:01:13 -- host/auth.sh@53 -- # local digest len file key 00:24:30.969 15:01:13 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:30.969 15:01:13 -- host/auth.sh@54 -- # local -A digests 00:24:30.969 15:01:13 -- host/auth.sh@56 -- # digest=null 00:24:30.969 15:01:13 -- host/auth.sh@56 -- # len=48 00:24:30.969 15:01:13 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:30.969 15:01:13 -- host/auth.sh@57 -- # key=fa21e0d3bf2b8cc4684db979a4c5e655448f9446bd5b7f90 00:24:30.969 15:01:13 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:30.969 15:01:13 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.XBG 00:24:30.969 15:01:13 -- host/auth.sh@59 -- # format_dhchap_key fa21e0d3bf2b8cc4684db979a4c5e655448f9446bd5b7f90 0 00:24:30.969 15:01:13 -- nvmf/common.sh@708 -- # format_key DHHC-1 fa21e0d3bf2b8cc4684db979a4c5e655448f9446bd5b7f90 0 00:24:30.969 15:01:13 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # key=fa21e0d3bf2b8cc4684db979a4c5e655448f9446bd5b7f90 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # 
digest=0 00:24:30.969 15:01:13 -- nvmf/common.sh@694 -- # python - 00:24:30.969 15:01:13 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.XBG 00:24:30.969 15:01:13 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.XBG 00:24:30.969 15:01:13 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.XBG 00:24:30.969 15:01:13 -- host/auth.sh@83 -- # gen_key sha256 32 00:24:30.969 15:01:13 -- host/auth.sh@53 -- # local digest len file key 00:24:30.969 15:01:13 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:30.969 15:01:13 -- host/auth.sh@54 -- # local -A digests 00:24:30.969 15:01:13 -- host/auth.sh@56 -- # digest=sha256 00:24:30.969 15:01:13 -- host/auth.sh@56 -- # len=32 00:24:30.969 15:01:13 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:30.969 15:01:13 -- host/auth.sh@57 -- # key=b8cde3046227df7390151092bc6bad3d 00:24:30.969 15:01:13 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:24:30.969 15:01:13 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.eRY 00:24:30.969 15:01:13 -- host/auth.sh@59 -- # format_dhchap_key b8cde3046227df7390151092bc6bad3d 1 00:24:30.969 15:01:13 -- nvmf/common.sh@708 -- # format_key DHHC-1 b8cde3046227df7390151092bc6bad3d 1 00:24:30.969 15:01:13 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # key=b8cde3046227df7390151092bc6bad3d 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # digest=1 00:24:30.969 15:01:13 -- nvmf/common.sh@694 -- # python - 00:24:30.969 15:01:13 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.eRY 00:24:30.969 15:01:13 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.eRY 00:24:30.969 15:01:13 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.eRY 00:24:30.969 15:01:13 -- host/auth.sh@84 -- # gen_key sha384 48 00:24:30.969 15:01:13 -- host/auth.sh@53 -- # local digest len file key 00:24:30.969 15:01:13 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:30.969 15:01:13 -- host/auth.sh@54 -- # local -A digests 00:24:30.969 15:01:13 -- host/auth.sh@56 -- # digest=sha384 00:24:30.969 15:01:13 -- host/auth.sh@56 -- # len=48 00:24:30.969 15:01:13 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:30.969 15:01:13 -- host/auth.sh@57 -- # key=8743fd13910acac4300f873c7bad490e4724d45e2dc957c7 00:24:30.969 15:01:13 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:24:30.969 15:01:13 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.uAM 00:24:30.969 15:01:13 -- host/auth.sh@59 -- # format_dhchap_key 8743fd13910acac4300f873c7bad490e4724d45e2dc957c7 2 00:24:30.969 15:01:13 -- nvmf/common.sh@708 -- # format_key DHHC-1 8743fd13910acac4300f873c7bad490e4724d45e2dc957c7 2 00:24:30.969 15:01:13 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:30.969 15:01:13 -- nvmf/common.sh@693 -- # key=8743fd13910acac4300f873c7bad490e4724d45e2dc957c7 00:24:30.970 15:01:13 -- nvmf/common.sh@693 -- # digest=2 00:24:30.970 15:01:13 -- nvmf/common.sh@694 -- # python - 00:24:30.970 15:01:13 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.uAM 00:24:30.970 15:01:13 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.uAM 00:24:30.970 15:01:13 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.uAM 00:24:30.970 15:01:13 -- host/auth.sh@85 -- # gen_key sha512 64 00:24:30.970 15:01:13 -- host/auth.sh@53 -- # local digest len file key 00:24:30.970 15:01:13 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:30.970 15:01:13 -- host/auth.sh@54 -- # local -A digests 00:24:30.970 15:01:13 -- host/auth.sh@56 -- # digest=sha512 00:24:30.970 15:01:13 -- host/auth.sh@56 -- # len=64 00:24:30.970 15:01:13 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:30.970 15:01:13 -- host/auth.sh@57 -- # key=75b6337e1eb723e26e39d5aa9331d284dd6e735d5031ee5f91122ab45692bcd3 00:24:30.970 15:01:13 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:24:31.231 15:01:13 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.YBx 00:24:31.231 15:01:13 -- host/auth.sh@59 -- # format_dhchap_key 75b6337e1eb723e26e39d5aa9331d284dd6e735d5031ee5f91122ab45692bcd3 3 00:24:31.231 15:01:13 -- nvmf/common.sh@708 -- # format_key DHHC-1 75b6337e1eb723e26e39d5aa9331d284dd6e735d5031ee5f91122ab45692bcd3 3 00:24:31.231 15:01:13 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:31.231 15:01:13 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:31.231 15:01:13 -- nvmf/common.sh@693 -- # key=75b6337e1eb723e26e39d5aa9331d284dd6e735d5031ee5f91122ab45692bcd3 00:24:31.231 15:01:13 -- nvmf/common.sh@693 -- # digest=3 00:24:31.231 15:01:13 -- nvmf/common.sh@694 -- # python - 00:24:31.231 15:01:13 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.YBx 00:24:31.231 15:01:13 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.YBx 00:24:31.231 15:01:13 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.YBx 00:24:31.231 15:01:13 -- host/auth.sh@87 -- # waitforlisten 1195278 00:24:31.231 15:01:13 -- common/autotest_common.sh@817 -- # '[' -z 1195278 ']' 00:24:31.231 15:01:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.231 15:01:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:31.231 15:01:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
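gen_key above draws random bytes with xxd and hands the resulting hex string to format_dhchap_key, which wraps it into the DHHC-1:NN:...: secret representation used on both sides; the two-digit id tracks the digest the key was generated for (00 null, 01 sha256, 02 sha384, 03 sha512, matching the keys[0]..keys[4] entries). A minimal sketch of that wrapping, assuming the inline python appends a little-endian CRC-32 of the secret before base64-encoding (the usual NVMe DH-HMAC-CHAP secret layout; this is illustrative, not the script's exact code):

  key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex characters, as in "gen_key null 32"
  file=$(mktemp -t spdk.key-null.XXX)
  python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode() + ":")' "$key" > "$file"
  chmod 0600 "$file"

The five generated files are then handed to the target application with keyring_file_add_key as key0..key4 and are referenced by those names in the --dhchap-key arguments further down.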
00:24:31.231 15:01:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:31.231 15:01:13 -- common/autotest_common.sh@10 -- # set +x 00:24:31.231 15:01:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:31.231 15:01:13 -- common/autotest_common.sh@850 -- # return 0 00:24:31.231 15:01:13 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:31.231 15:01:13 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.APu 00:24:31.231 15:01:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.231 15:01:13 -- common/autotest_common.sh@10 -- # set +x 00:24:31.231 15:01:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.231 15:01:13 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:31.231 15:01:13 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XBG 00:24:31.231 15:01:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.231 15:01:13 -- common/autotest_common.sh@10 -- # set +x 00:24:31.231 15:01:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.231 15:01:13 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:31.231 15:01:13 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.eRY 00:24:31.231 15:01:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.231 15:01:13 -- common/autotest_common.sh@10 -- # set +x 00:24:31.231 15:01:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.231 15:01:13 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:31.231 15:01:13 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.uAM 00:24:31.231 15:01:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.231 15:01:13 -- common/autotest_common.sh@10 -- # set +x 00:24:31.231 15:01:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.231 15:01:13 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:31.231 15:01:13 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.YBx 00:24:31.231 15:01:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.231 15:01:13 -- common/autotest_common.sh@10 -- # set +x 00:24:31.231 15:01:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.231 15:01:13 -- host/auth.sh@92 -- # nvmet_auth_init 00:24:31.490 15:01:13 -- host/auth.sh@35 -- # get_main_ns_ip 00:24:31.490 15:01:13 -- nvmf/common.sh@717 -- # local ip 00:24:31.490 15:01:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.490 15:01:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.490 15:01:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.490 15:01:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.490 15:01:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.490 15:01:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.490 15:01:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.490 15:01:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.490 15:01:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.490 15:01:13 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:31.490 15:01:13 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:31.490 15:01:13 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:31.490 15:01:13 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:31.490 15:01:13 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:31.490 15:01:13 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:31.490 15:01:13 -- nvmf/common.sh@628 -- # local block nvme 00:24:31.490 15:01:13 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:31.490 15:01:13 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:31.490 15:01:13 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:31.490 15:01:13 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:34.785 Waiting for block devices as requested 00:24:34.785 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:34.785 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:34.785 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:35.044 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:35.044 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:35.044 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:35.304 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:35.304 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:35.304 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:35.564 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:35.564 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:35.564 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:35.824 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:35.824 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:35.824 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:36.084 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:36.084 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:37.032 15:01:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:37.032 15:01:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:37.032 15:01:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:37.032 15:01:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:37.032 15:01:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:37.032 15:01:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:37.032 15:01:19 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:37.032 15:01:19 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:37.032 15:01:19 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:37.032 No valid GPT data, bailing 00:24:37.032 15:01:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:37.032 15:01:19 -- scripts/common.sh@391 -- # pt= 00:24:37.032 15:01:19 -- scripts/common.sh@392 -- # return 1 00:24:37.032 15:01:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:37.032 15:01:19 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:37.032 15:01:19 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:37.032 15:01:19 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:37.032 15:01:19 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:37.032 15:01:19 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:37.032 15:01:19 -- nvmf/common.sh@656 -- # echo 1 00:24:37.032 15:01:19 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:37.032 15:01:19 -- nvmf/common.sh@658 -- # echo 1 00:24:37.032 15:01:19 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:37.032 15:01:19 -- nvmf/common.sh@661 -- # echo tcp 00:24:37.032 15:01:19 -- 
nvmf/common.sh@662 -- # echo 4420 00:24:37.032 15:01:19 -- nvmf/common.sh@663 -- # echo ipv4 00:24:37.032 15:01:19 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:37.032 15:01:19 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:24:37.032 00:24:37.032 Discovery Log Number of Records 2, Generation counter 2 00:24:37.032 =====Discovery Log Entry 0====== 00:24:37.032 trtype: tcp 00:24:37.032 adrfam: ipv4 00:24:37.032 subtype: current discovery subsystem 00:24:37.032 treq: not specified, sq flow control disable supported 00:24:37.032 portid: 1 00:24:37.032 trsvcid: 4420 00:24:37.032 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:37.032 traddr: 10.0.0.1 00:24:37.032 eflags: none 00:24:37.032 sectype: none 00:24:37.032 =====Discovery Log Entry 1====== 00:24:37.032 trtype: tcp 00:24:37.032 adrfam: ipv4 00:24:37.032 subtype: nvme subsystem 00:24:37.032 treq: not specified, sq flow control disable supported 00:24:37.032 portid: 1 00:24:37.032 trsvcid: 4420 00:24:37.032 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:37.032 traddr: 10.0.0.1 00:24:37.032 eflags: none 00:24:37.032 sectype: none 00:24:37.032 15:01:19 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:37.032 15:01:19 -- host/auth.sh@37 -- # echo 0 00:24:37.032 15:01:19 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:37.032 15:01:19 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:37.032 15:01:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.032 15:01:19 -- host/auth.sh@44 -- # digest=sha256 00:24:37.032 15:01:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.032 15:01:19 -- host/auth.sh@44 -- # keyid=1 00:24:37.032 15:01:19 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:37.032 15:01:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:37.032 15:01:19 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:37.032 15:01:19 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:37.032 15:01:19 -- host/auth.sh@100 -- # IFS=, 00:24:37.032 15:01:19 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:24:37.032 15:01:19 -- host/auth.sh@100 -- # IFS=, 00:24:37.032 15:01:19 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:37.033 15:01:19 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:37.033 15:01:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.033 15:01:19 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:24:37.033 15:01:19 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:37.033 15:01:19 -- host/auth.sh@68 -- # keyid=1 00:24:37.033 15:01:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:37.033 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.033 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:37.033 15:01:19 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.033 15:01:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.033 15:01:19 -- nvmf/common.sh@717 -- # local ip 00:24:37.033 15:01:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.033 15:01:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.033 15:01:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.033 15:01:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.033 15:01:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.033 15:01:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.033 15:01:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.033 15:01:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.033 15:01:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.033 15:01:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:37.033 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.033 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:37.295 nvme0n1 00:24:37.295 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.295 15:01:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.295 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.295 15:01:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.295 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:37.295 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.295 15:01:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.295 15:01:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.295 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.295 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:37.295 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.295 15:01:19 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:37.295 15:01:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.295 15:01:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.295 15:01:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:37.295 15:01:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.295 15:01:19 -- host/auth.sh@44 -- # digest=sha256 00:24:37.296 15:01:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.296 15:01:19 -- host/auth.sh@44 -- # keyid=0 00:24:37.296 15:01:19 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:37.296 15:01:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:37.296 15:01:19 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:37.296 15:01:19 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:37.296 15:01:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:24:37.296 15:01:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.296 15:01:19 -- host/auth.sh@68 -- # digest=sha256 00:24:37.296 15:01:19 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:37.296 15:01:19 -- host/auth.sh@68 -- # keyid=0 00:24:37.296 15:01:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.296 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.296 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:37.296 15:01:19 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.296 15:01:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.296 15:01:19 -- nvmf/common.sh@717 -- # local ip 00:24:37.296 15:01:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.296 15:01:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.296 15:01:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.296 15:01:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.296 15:01:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.296 15:01:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.296 15:01:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.296 15:01:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.296 15:01:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.296 15:01:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:37.296 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.296 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:37.559 nvme0n1 00:24:37.559 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.559 15:01:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.559 15:01:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.559 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.559 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:37.559 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.559 15:01:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.559 15:01:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.559 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.559 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:37.559 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.559 15:01:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.559 15:01:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:37.559 15:01:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.559 15:01:20 -- host/auth.sh@44 -- # digest=sha256 00:24:37.559 15:01:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.559 15:01:20 -- host/auth.sh@44 -- # keyid=1 00:24:37.559 15:01:20 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:37.559 15:01:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:37.559 15:01:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:37.559 15:01:20 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:37.559 15:01:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:24:37.559 15:01:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.559 15:01:20 -- host/auth.sh@68 -- # digest=sha256 00:24:37.559 15:01:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:37.559 15:01:20 -- host/auth.sh@68 -- # keyid=1 00:24:37.559 15:01:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.559 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.559 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:37.559 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.559 15:01:20 -- host/auth.sh@70 -- # get_main_ns_ip 
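On the target side, configure_kernel_target and nvmet_auth_init drive the kernel nvmet configfs tree directly: subsystem nqn.2024-02.io.spdk:cnode0 backed by /dev/nvme0n1, exposed on TCP 10.0.0.1:4420, restricted to host nqn.2024-02.io.spdk:host0. The repeated echo 'hmac(sha256)' / echo ffdhe2048 / echo DHHC-1:... lines from nvmet_auth_set_key presumably land in the host entry's dhchap attributes (the xtrace does not show the redirections). Note that in this test the SPDK application inside the namespace plays the NVMe host, while the kernel target sits in the root namespace. A condensed sketch of the same configfs layout, with attribute names assumed from the stock nvmet interface:

  cd /sys/kernel/config/nvmet
  mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
  echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
  mkdir ports/1
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/
  mkdir hosts/nqn.2024-02.io.spdk:host0
  echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
  ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/
  echo 'hmac(sha256)' > hosts/nqn.2024-02.io.spdk:host0/dhchap_hash     # assumed attribute names
  echo ffdhe2048 > hosts/nqn.2024-02.io.spdk:host0/dhchap_dhgroup
  echo 'DHHC-1:00:...' > hosts/nqn.2024-02.io.spdk:host0/dhchap_key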
00:24:37.559 15:01:20 -- nvmf/common.sh@717 -- # local ip 00:24:37.559 15:01:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.559 15:01:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.559 15:01:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.559 15:01:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.559 15:01:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.559 15:01:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.559 15:01:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.559 15:01:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.559 15:01:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.559 15:01:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:37.559 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.559 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:37.882 nvme0n1 00:24:37.882 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.882 15:01:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.882 15:01:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.882 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.882 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:37.882 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.882 15:01:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.882 15:01:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.882 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.882 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:37.882 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.882 15:01:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.882 15:01:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:37.882 15:01:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.882 15:01:20 -- host/auth.sh@44 -- # digest=sha256 00:24:37.882 15:01:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.882 15:01:20 -- host/auth.sh@44 -- # keyid=2 00:24:37.882 15:01:20 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:37.882 15:01:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:37.882 15:01:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:37.882 15:01:20 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:37.882 15:01:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:24:37.882 15:01:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.882 15:01:20 -- host/auth.sh@68 -- # digest=sha256 00:24:37.882 15:01:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:37.882 15:01:20 -- host/auth.sh@68 -- # keyid=2 00:24:37.882 15:01:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.882 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.882 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:37.882 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.882 15:01:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.882 15:01:20 -- nvmf/common.sh@717 -- # local ip 00:24:37.882 15:01:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.882 15:01:20 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:24:37.882 15:01:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.882 15:01:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.882 15:01:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.882 15:01:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.882 15:01:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.882 15:01:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.882 15:01:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.882 15:01:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.882 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.882 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:37.883 nvme0n1 00:24:37.883 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.883 15:01:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.883 15:01:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.883 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.883 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:37.883 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.146 15:01:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.146 15:01:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.146 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.146 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:38.146 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.146 15:01:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.146 15:01:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:38.146 15:01:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.146 15:01:20 -- host/auth.sh@44 -- # digest=sha256 00:24:38.146 15:01:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.146 15:01:20 -- host/auth.sh@44 -- # keyid=3 00:24:38.146 15:01:20 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:38.146 15:01:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:38.146 15:01:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:38.146 15:01:20 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:38.146 15:01:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:24:38.146 15:01:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.146 15:01:20 -- host/auth.sh@68 -- # digest=sha256 00:24:38.146 15:01:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:38.146 15:01:20 -- host/auth.sh@68 -- # keyid=3 00:24:38.146 15:01:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:38.146 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.146 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:38.146 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.146 15:01:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.146 15:01:20 -- nvmf/common.sh@717 -- # local ip 00:24:38.146 15:01:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.146 15:01:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.146 15:01:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
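On the initiator side, each connect_authenticate pass in the trace is the same four RPCs against the namespaced nvmf_tgt: restrict the allowed DH-HMAC-CHAP digests and dhgroups, attach a controller to the kernel target using one of the keyring entries registered earlier, check that a controller named nvme0 appears, then detach. Roughly, for the keyid 0 case (rpc.py path and the default /var/tmp/spdk.sock socket are assumptions; the log's rpc_cmd helper does the equivalent):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'     # expect nvme0
  $rpc bdev_nvme_detach_controller nvme0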
00:24:38.146 15:01:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.146 15:01:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.146 15:01:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.146 15:01:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.146 15:01:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.146 15:01:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.146 15:01:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:38.146 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.146 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:38.146 nvme0n1 00:24:38.146 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.146 15:01:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.146 15:01:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.146 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.146 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:38.146 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.146 15:01:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.146 15:01:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.146 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.146 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:38.146 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.146 15:01:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.146 15:01:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:38.146 15:01:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.146 15:01:20 -- host/auth.sh@44 -- # digest=sha256 00:24:38.146 15:01:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.146 15:01:20 -- host/auth.sh@44 -- # keyid=4 00:24:38.146 15:01:20 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:38.146 15:01:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:38.146 15:01:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:38.146 15:01:20 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:38.146 15:01:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:24:38.146 15:01:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.146 15:01:20 -- host/auth.sh@68 -- # digest=sha256 00:24:38.146 15:01:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:38.146 15:01:20 -- host/auth.sh@68 -- # keyid=4 00:24:38.146 15:01:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:38.146 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.146 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:38.146 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.146 15:01:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.146 15:01:20 -- nvmf/common.sh@717 -- # local ip 00:24:38.146 15:01:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.147 15:01:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.147 15:01:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.147 15:01:20 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.147 15:01:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.147 15:01:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.147 15:01:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.147 15:01:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.147 15:01:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.147 15:01:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.147 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.147 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:38.406 nvme0n1 00:24:38.406 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.406 15:01:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.406 15:01:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.406 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.406 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:38.406 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.406 15:01:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.406 15:01:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.406 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.406 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:38.406 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.406 15:01:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.406 15:01:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.406 15:01:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:38.406 15:01:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.406 15:01:21 -- host/auth.sh@44 -- # digest=sha256 00:24:38.406 15:01:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.406 15:01:21 -- host/auth.sh@44 -- # keyid=0 00:24:38.406 15:01:21 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:38.406 15:01:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:38.406 15:01:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:38.406 15:01:21 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:38.406 15:01:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:24:38.406 15:01:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.407 15:01:21 -- host/auth.sh@68 -- # digest=sha256 00:24:38.407 15:01:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:38.407 15:01:21 -- host/auth.sh@68 -- # keyid=0 00:24:38.407 15:01:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:38.407 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.407 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:38.407 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.407 15:01:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.407 15:01:21 -- nvmf/common.sh@717 -- # local ip 00:24:38.407 15:01:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.407 15:01:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.407 15:01:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.407 15:01:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.407 15:01:21 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:24:38.407 15:01:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.407 15:01:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.407 15:01:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.407 15:01:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.407 15:01:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:38.407 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.407 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:38.666 nvme0n1 00:24:38.666 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.666 15:01:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.666 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.666 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:38.666 15:01:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.666 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.666 15:01:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.666 15:01:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.666 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.666 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:38.666 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.666 15:01:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.666 15:01:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:38.666 15:01:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.666 15:01:21 -- host/auth.sh@44 -- # digest=sha256 00:24:38.666 15:01:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.666 15:01:21 -- host/auth.sh@44 -- # keyid=1 00:24:38.666 15:01:21 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:38.666 15:01:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:38.666 15:01:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:38.666 15:01:21 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:38.666 15:01:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:24:38.666 15:01:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.666 15:01:21 -- host/auth.sh@68 -- # digest=sha256 00:24:38.666 15:01:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:38.666 15:01:21 -- host/auth.sh@68 -- # keyid=1 00:24:38.666 15:01:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:38.666 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.666 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:38.666 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.666 15:01:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.666 15:01:21 -- nvmf/common.sh@717 -- # local ip 00:24:38.666 15:01:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.666 15:01:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.666 15:01:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.666 15:01:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.666 15:01:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.666 15:01:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.666 15:01:21 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.666 15:01:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.666 15:01:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.666 15:01:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:38.666 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.666 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:38.925 nvme0n1 00:24:38.925 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.925 15:01:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.925 15:01:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.925 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.925 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:38.925 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.925 15:01:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.925 15:01:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.925 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.925 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:38.925 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.925 15:01:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.925 15:01:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:38.925 15:01:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.925 15:01:21 -- host/auth.sh@44 -- # digest=sha256 00:24:38.925 15:01:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.925 15:01:21 -- host/auth.sh@44 -- # keyid=2 00:24:38.925 15:01:21 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:38.925 15:01:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:38.925 15:01:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:38.925 15:01:21 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:38.925 15:01:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:24:38.925 15:01:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.925 15:01:21 -- host/auth.sh@68 -- # digest=sha256 00:24:38.925 15:01:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:38.925 15:01:21 -- host/auth.sh@68 -- # keyid=2 00:24:38.925 15:01:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:38.925 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.925 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:38.925 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.925 15:01:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.925 15:01:21 -- nvmf/common.sh@717 -- # local ip 00:24:38.925 15:01:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.925 15:01:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.925 15:01:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.925 15:01:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.925 15:01:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.926 15:01:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.926 15:01:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.926 15:01:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.926 15:01:21 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:24:38.926 15:01:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:38.926 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.926 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:39.185 nvme0n1 00:24:39.185 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.185 15:01:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.185 15:01:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.185 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.185 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:39.185 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.185 15:01:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.185 15:01:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.185 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.185 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:39.185 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.185 15:01:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.185 15:01:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:39.185 15:01:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.185 15:01:21 -- host/auth.sh@44 -- # digest=sha256 00:24:39.185 15:01:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.185 15:01:21 -- host/auth.sh@44 -- # keyid=3 00:24:39.185 15:01:21 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:39.185 15:01:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:39.185 15:01:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:39.185 15:01:21 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:39.185 15:01:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:24:39.185 15:01:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.185 15:01:21 -- host/auth.sh@68 -- # digest=sha256 00:24:39.185 15:01:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:39.185 15:01:21 -- host/auth.sh@68 -- # keyid=3 00:24:39.185 15:01:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:39.185 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.185 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:39.185 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.185 15:01:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.185 15:01:21 -- nvmf/common.sh@717 -- # local ip 00:24:39.185 15:01:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.185 15:01:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.185 15:01:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.185 15:01:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.185 15:01:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.185 15:01:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.185 15:01:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.185 15:01:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.185 15:01:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.185 15:01:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:39.185 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.185 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:39.444 nvme0n1 00:24:39.444 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.444 15:01:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.444 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.444 15:01:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.444 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:39.444 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.444 15:01:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.444 15:01:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.444 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.444 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:39.444 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.444 15:01:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.444 15:01:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:39.444 15:01:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.444 15:01:22 -- host/auth.sh@44 -- # digest=sha256 00:24:39.444 15:01:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.444 15:01:22 -- host/auth.sh@44 -- # keyid=4 00:24:39.444 15:01:22 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:39.444 15:01:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:39.444 15:01:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:39.444 15:01:22 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:39.444 15:01:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:24:39.444 15:01:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.444 15:01:22 -- host/auth.sh@68 -- # digest=sha256 00:24:39.444 15:01:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:39.444 15:01:22 -- host/auth.sh@68 -- # keyid=4 00:24:39.444 15:01:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:39.444 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.445 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:39.445 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.445 15:01:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.445 15:01:22 -- nvmf/common.sh@717 -- # local ip 00:24:39.445 15:01:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.445 15:01:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.445 15:01:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.445 15:01:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.445 15:01:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.445 15:01:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.445 15:01:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.445 15:01:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.445 15:01:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.445 15:01:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:24:39.445 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.445 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:39.704 nvme0n1 00:24:39.704 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.704 15:01:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.704 15:01:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.704 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.704 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:39.704 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.704 15:01:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.704 15:01:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.704 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.704 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:39.704 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.704 15:01:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.704 15:01:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.704 15:01:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:39.704 15:01:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.704 15:01:22 -- host/auth.sh@44 -- # digest=sha256 00:24:39.704 15:01:22 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.704 15:01:22 -- host/auth.sh@44 -- # keyid=0 00:24:39.704 15:01:22 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:39.704 15:01:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:39.704 15:01:22 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:39.704 15:01:22 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:39.704 15:01:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:24:39.704 15:01:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.704 15:01:22 -- host/auth.sh@68 -- # digest=sha256 00:24:39.704 15:01:22 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:39.704 15:01:22 -- host/auth.sh@68 -- # keyid=0 00:24:39.704 15:01:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:39.704 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.704 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:39.704 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.704 15:01:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.704 15:01:22 -- nvmf/common.sh@717 -- # local ip 00:24:39.704 15:01:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.704 15:01:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.704 15:01:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.704 15:01:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.704 15:01:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.704 15:01:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.704 15:01:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.704 15:01:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.704 15:01:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.705 15:01:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:39.705 15:01:22 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:39.705 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:39.963 nvme0n1 00:24:39.963 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.963 15:01:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.963 15:01:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.963 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.963 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:39.963 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.222 15:01:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.222 15:01:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.222 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.222 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:40.222 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.222 15:01:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.222 15:01:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:40.222 15:01:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.222 15:01:22 -- host/auth.sh@44 -- # digest=sha256 00:24:40.222 15:01:22 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.222 15:01:22 -- host/auth.sh@44 -- # keyid=1 00:24:40.222 15:01:22 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:40.222 15:01:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:40.222 15:01:22 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:40.222 15:01:22 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:40.222 15:01:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:24:40.222 15:01:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.222 15:01:22 -- host/auth.sh@68 -- # digest=sha256 00:24:40.222 15:01:22 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:40.222 15:01:22 -- host/auth.sh@68 -- # keyid=1 00:24:40.222 15:01:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:40.222 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.222 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:40.222 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.222 15:01:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.222 15:01:22 -- nvmf/common.sh@717 -- # local ip 00:24:40.222 15:01:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.222 15:01:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.222 15:01:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.222 15:01:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.222 15:01:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.222 15:01:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.222 15:01:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.222 15:01:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.222 15:01:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.222 15:01:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:40.222 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.222 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:40.482 nvme0n1 00:24:40.482 
15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.482 15:01:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.482 15:01:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.482 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.482 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:40.482 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.482 15:01:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.482 15:01:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.482 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.482 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:40.482 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.482 15:01:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.482 15:01:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:40.482 15:01:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.482 15:01:23 -- host/auth.sh@44 -- # digest=sha256 00:24:40.482 15:01:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.482 15:01:23 -- host/auth.sh@44 -- # keyid=2 00:24:40.482 15:01:23 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:40.482 15:01:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:40.482 15:01:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:40.482 15:01:23 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:40.482 15:01:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:24:40.482 15:01:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.482 15:01:23 -- host/auth.sh@68 -- # digest=sha256 00:24:40.482 15:01:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:40.482 15:01:23 -- host/auth.sh@68 -- # keyid=2 00:24:40.482 15:01:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:40.482 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.482 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:40.482 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.482 15:01:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.482 15:01:23 -- nvmf/common.sh@717 -- # local ip 00:24:40.482 15:01:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.482 15:01:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.482 15:01:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.482 15:01:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.482 15:01:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.482 15:01:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.482 15:01:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.482 15:01:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.482 15:01:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.482 15:01:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:40.482 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.482 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:40.742 nvme0n1 00:24:40.742 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.742 15:01:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.742 15:01:23 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.742 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.742 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:40.742 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.742 15:01:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.742 15:01:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.742 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.742 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:40.742 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.742 15:01:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.742 15:01:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:40.742 15:01:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.742 15:01:23 -- host/auth.sh@44 -- # digest=sha256 00:24:40.742 15:01:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.742 15:01:23 -- host/auth.sh@44 -- # keyid=3 00:24:40.742 15:01:23 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:40.742 15:01:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:40.742 15:01:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:40.742 15:01:23 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:40.742 15:01:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:24:40.742 15:01:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.742 15:01:23 -- host/auth.sh@68 -- # digest=sha256 00:24:40.742 15:01:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:40.742 15:01:23 -- host/auth.sh@68 -- # keyid=3 00:24:40.742 15:01:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:40.742 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.742 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:40.742 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.742 15:01:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.742 15:01:23 -- nvmf/common.sh@717 -- # local ip 00:24:40.742 15:01:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.742 15:01:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.742 15:01:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.742 15:01:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.742 15:01:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.742 15:01:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.742 15:01:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.742 15:01:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.742 15:01:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.742 15:01:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:40.743 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.743 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:41.002 nvme0n1 00:24:41.263 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.263 15:01:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.263 15:01:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:41.263 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 
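Each block of trace output above corresponds to one authentication attempt: a key is installed on the target side, the host is restricted to a single digest and DH group, a controller is attached with the matching DH-HMAC-CHAP key, the connection is verified, and the controller is detached again. An approximate reconstruction of that per-key cycle, pieced together from the commands visible in this trace (the helper name nvmet_auth_set_key and the loop variables follow the host/auth.sh markers; the real script may differ in detail):

    # target side: register the key for this digest/dhgroup/keyid combination
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
    # host side: allow only this digest and DH group for DH-HMAC-CHAP negotiation
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # attach with the matching key, confirm the controller shows up, then tear it down
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
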
00:24:41.263 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.263 15:01:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.263 15:01:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.263 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.263 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.263 15:01:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:41.263 15:01:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:41.263 15:01:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:41.263 15:01:23 -- host/auth.sh@44 -- # digest=sha256 00:24:41.263 15:01:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.263 15:01:23 -- host/auth.sh@44 -- # keyid=4 00:24:41.263 15:01:23 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:41.263 15:01:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:41.263 15:01:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:41.263 15:01:23 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:41.263 15:01:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:24:41.263 15:01:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:41.263 15:01:23 -- host/auth.sh@68 -- # digest=sha256 00:24:41.263 15:01:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:41.263 15:01:23 -- host/auth.sh@68 -- # keyid=4 00:24:41.263 15:01:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.263 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.263 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:41.263 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.263 15:01:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:41.263 15:01:23 -- nvmf/common.sh@717 -- # local ip 00:24:41.263 15:01:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.263 15:01:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.263 15:01:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.263 15:01:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.263 15:01:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.263 15:01:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.263 15:01:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.263 15:01:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.263 15:01:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:41.263 15:01:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.263 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.263 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:41.522 nvme0n1 00:24:41.522 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.522 15:01:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.522 15:01:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:41.522 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.523 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:41.523 
15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.523 15:01:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.523 15:01:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.523 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.523 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:41.523 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.523 15:01:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.523 15:01:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:41.523 15:01:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:41.523 15:01:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:41.523 15:01:24 -- host/auth.sh@44 -- # digest=sha256 00:24:41.523 15:01:24 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.523 15:01:24 -- host/auth.sh@44 -- # keyid=0 00:24:41.523 15:01:24 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:41.523 15:01:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:41.523 15:01:24 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:41.523 15:01:24 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:41.523 15:01:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:24:41.523 15:01:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:41.523 15:01:24 -- host/auth.sh@68 -- # digest=sha256 00:24:41.523 15:01:24 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:41.523 15:01:24 -- host/auth.sh@68 -- # keyid=0 00:24:41.523 15:01:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:41.523 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.523 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:41.523 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.523 15:01:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:41.523 15:01:24 -- nvmf/common.sh@717 -- # local ip 00:24:41.523 15:01:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.523 15:01:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.523 15:01:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.523 15:01:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.523 15:01:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.523 15:01:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.523 15:01:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.523 15:01:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.523 15:01:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:41.523 15:01:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:41.523 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.523 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:42.094 nvme0n1 00:24:42.094 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.094 15:01:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.094 15:01:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:42.094 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.094 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:42.094 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.094 15:01:24 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.094 15:01:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.094 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.094 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:42.094 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.094 15:01:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:42.094 15:01:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:42.094 15:01:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:42.094 15:01:24 -- host/auth.sh@44 -- # digest=sha256 00:24:42.094 15:01:24 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.094 15:01:24 -- host/auth.sh@44 -- # keyid=1 00:24:42.094 15:01:24 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:42.094 15:01:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:42.094 15:01:24 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:42.094 15:01:24 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:42.094 15:01:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:24:42.094 15:01:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:42.094 15:01:24 -- host/auth.sh@68 -- # digest=sha256 00:24:42.094 15:01:24 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:42.094 15:01:24 -- host/auth.sh@68 -- # keyid=1 00:24:42.094 15:01:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:42.094 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.094 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:42.094 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.094 15:01:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:42.094 15:01:24 -- nvmf/common.sh@717 -- # local ip 00:24:42.094 15:01:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.094 15:01:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.094 15:01:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.094 15:01:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.094 15:01:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:42.094 15:01:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.094 15:01:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.094 15:01:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.094 15:01:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.094 15:01:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:42.094 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.094 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:42.663 nvme0n1 00:24:42.663 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.664 15:01:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.664 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.664 15:01:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:42.664 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:42.664 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.664 15:01:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.664 15:01:25 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:42.664 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.664 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:42.664 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.664 15:01:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:42.664 15:01:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:42.664 15:01:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:42.664 15:01:25 -- host/auth.sh@44 -- # digest=sha256 00:24:42.664 15:01:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.664 15:01:25 -- host/auth.sh@44 -- # keyid=2 00:24:42.664 15:01:25 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:42.664 15:01:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:42.664 15:01:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:42.664 15:01:25 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:42.664 15:01:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:24:42.664 15:01:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:42.664 15:01:25 -- host/auth.sh@68 -- # digest=sha256 00:24:42.664 15:01:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:42.664 15:01:25 -- host/auth.sh@68 -- # keyid=2 00:24:42.664 15:01:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:42.664 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.664 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:42.664 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.664 15:01:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:42.664 15:01:25 -- nvmf/common.sh@717 -- # local ip 00:24:42.664 15:01:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.664 15:01:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.664 15:01:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.664 15:01:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.664 15:01:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:42.664 15:01:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.664 15:01:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.664 15:01:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.664 15:01:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.664 15:01:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:42.664 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.664 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:43.234 nvme0n1 00:24:43.235 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.235 15:01:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.235 15:01:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:43.235 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.235 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:43.235 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.235 15:01:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.235 15:01:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.235 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.235 15:01:25 -- common/autotest_common.sh@10 -- # 
set +x 00:24:43.235 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.235 15:01:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:43.235 15:01:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:43.235 15:01:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:43.235 15:01:25 -- host/auth.sh@44 -- # digest=sha256 00:24:43.235 15:01:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.235 15:01:25 -- host/auth.sh@44 -- # keyid=3 00:24:43.235 15:01:25 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:43.235 15:01:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:43.235 15:01:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:43.235 15:01:25 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:43.235 15:01:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:24:43.235 15:01:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:43.235 15:01:25 -- host/auth.sh@68 -- # digest=sha256 00:24:43.235 15:01:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:43.235 15:01:25 -- host/auth.sh@68 -- # keyid=3 00:24:43.235 15:01:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:43.235 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.235 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:43.235 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.235 15:01:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:43.235 15:01:25 -- nvmf/common.sh@717 -- # local ip 00:24:43.235 15:01:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.235 15:01:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.235 15:01:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.235 15:01:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.235 15:01:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.235 15:01:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.235 15:01:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.235 15:01:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.235 15:01:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.235 15:01:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:43.235 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.235 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:43.815 nvme0n1 00:24:43.815 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.815 15:01:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.815 15:01:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:43.815 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.815 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:43.815 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.815 15:01:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.815 15:01:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.815 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.815 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:43.815 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.815 15:01:26 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:43.815 15:01:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:43.815 15:01:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:43.815 15:01:26 -- host/auth.sh@44 -- # digest=sha256 00:24:43.815 15:01:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.815 15:01:26 -- host/auth.sh@44 -- # keyid=4 00:24:43.815 15:01:26 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:43.815 15:01:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:43.815 15:01:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:43.815 15:01:26 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:43.815 15:01:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:24:43.815 15:01:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:43.815 15:01:26 -- host/auth.sh@68 -- # digest=sha256 00:24:43.815 15:01:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:43.815 15:01:26 -- host/auth.sh@68 -- # keyid=4 00:24:43.816 15:01:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:43.816 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.816 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:43.816 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.816 15:01:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:43.816 15:01:26 -- nvmf/common.sh@717 -- # local ip 00:24:43.816 15:01:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.816 15:01:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.816 15:01:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.816 15:01:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.816 15:01:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.816 15:01:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.816 15:01:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.816 15:01:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.816 15:01:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.816 15:01:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.816 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.816 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:44.076 nvme0n1 00:24:44.076 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.076 15:01:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.076 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.076 15:01:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:44.076 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:44.336 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.336 15:01:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.336 15:01:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.336 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.336 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:44.336 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.336 15:01:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.336 15:01:26 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:44.336 15:01:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:44.336 15:01:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:44.336 15:01:26 -- host/auth.sh@44 -- # digest=sha256 00:24:44.336 15:01:26 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.336 15:01:26 -- host/auth.sh@44 -- # keyid=0 00:24:44.336 15:01:26 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:44.336 15:01:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:44.336 15:01:26 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:44.336 15:01:26 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:44.336 15:01:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:24:44.336 15:01:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:44.336 15:01:26 -- host/auth.sh@68 -- # digest=sha256 00:24:44.336 15:01:26 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:44.336 15:01:26 -- host/auth.sh@68 -- # keyid=0 00:24:44.336 15:01:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:44.336 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.336 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:44.336 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.336 15:01:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:44.336 15:01:26 -- nvmf/common.sh@717 -- # local ip 00:24:44.336 15:01:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.336 15:01:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.336 15:01:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.336 15:01:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.336 15:01:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.336 15:01:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.336 15:01:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.336 15:01:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.336 15:01:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.336 15:01:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:44.336 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.336 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:44.906 nvme0n1 00:24:44.906 15:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.906 15:01:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.906 15:01:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:44.906 15:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.906 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:24:45.167 15:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.167 15:01:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.167 15:01:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.167 15:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.167 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:24:45.167 15:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.167 15:01:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.167 15:01:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:45.167 15:01:27 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.167 15:01:27 -- host/auth.sh@44 -- # digest=sha256 00:24:45.167 15:01:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.167 15:01:27 -- host/auth.sh@44 -- # keyid=1 00:24:45.167 15:01:27 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:45.167 15:01:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:45.167 15:01:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:45.167 15:01:27 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:45.167 15:01:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:24:45.167 15:01:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.167 15:01:27 -- host/auth.sh@68 -- # digest=sha256 00:24:45.167 15:01:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:45.167 15:01:27 -- host/auth.sh@68 -- # keyid=1 00:24:45.167 15:01:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:45.167 15:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.167 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:24:45.167 15:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.167 15:01:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.167 15:01:27 -- nvmf/common.sh@717 -- # local ip 00:24:45.167 15:01:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.167 15:01:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.167 15:01:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.167 15:01:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.167 15:01:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.167 15:01:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.167 15:01:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.167 15:01:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.167 15:01:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.167 15:01:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:45.167 15:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.167 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:24:45.736 nvme0n1 00:24:45.736 15:01:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.997 15:01:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.997 15:01:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:45.997 15:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.997 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:24:45.997 15:01:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.997 15:01:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.997 15:01:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.997 15:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.997 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:24:45.997 15:01:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.997 15:01:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.997 15:01:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:45.997 15:01:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.997 15:01:28 -- host/auth.sh@44 -- # digest=sha256 
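That cycle is driven by three nested loops, one per digest, DH group, and key index, which is why the identical sequence repeats below with only the digest=, dhgroup= and keyid= values changing. A minimal sketch of the outer structure, inferred from the host/auth.sh@107-111 trace markers rather than copied from the script:

    for digest in "${digests[@]}"; do          # sha256, sha384, ... per this run
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 through ffdhe8192
        for keyid in "${!keys[@]}"; do         # key indexes 0..4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
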
00:24:45.997 15:01:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.997 15:01:28 -- host/auth.sh@44 -- # keyid=2 00:24:45.997 15:01:28 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:45.997 15:01:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:45.997 15:01:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:45.997 15:01:28 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:45.997 15:01:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:24:45.997 15:01:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.997 15:01:28 -- host/auth.sh@68 -- # digest=sha256 00:24:45.997 15:01:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:45.997 15:01:28 -- host/auth.sh@68 -- # keyid=2 00:24:45.997 15:01:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:45.997 15:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.997 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:24:45.997 15:01:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.997 15:01:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.997 15:01:28 -- nvmf/common.sh@717 -- # local ip 00:24:45.997 15:01:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.997 15:01:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.997 15:01:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.997 15:01:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.997 15:01:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.997 15:01:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.997 15:01:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.997 15:01:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.997 15:01:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.997 15:01:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:45.997 15:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.997 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:24:46.569 nvme0n1 00:24:46.569 15:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.569 15:01:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.569 15:01:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:46.570 15:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.570 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:24:46.860 15:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.860 15:01:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.860 15:01:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.860 15:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.860 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:24:46.860 15:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.860 15:01:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:46.860 15:01:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:46.860 15:01:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:46.860 15:01:29 -- host/auth.sh@44 -- # digest=sha256 00:24:46.860 15:01:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.860 15:01:29 -- host/auth.sh@44 -- # keyid=3 00:24:46.860 15:01:29 -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:46.860 15:01:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:46.860 15:01:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:46.860 15:01:29 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:46.860 15:01:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:24:46.860 15:01:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:46.860 15:01:29 -- host/auth.sh@68 -- # digest=sha256 00:24:46.860 15:01:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:46.860 15:01:29 -- host/auth.sh@68 -- # keyid=3 00:24:46.860 15:01:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:46.860 15:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.860 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:24:46.860 15:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.860 15:01:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:46.860 15:01:29 -- nvmf/common.sh@717 -- # local ip 00:24:46.860 15:01:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.860 15:01:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.860 15:01:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.860 15:01:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.860 15:01:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.860 15:01:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.860 15:01:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.860 15:01:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.860 15:01:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.860 15:01:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:46.860 15:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.860 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:24:47.433 nvme0n1 00:24:47.433 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.433 15:01:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.433 15:01:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:47.434 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.434 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:47.434 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.694 15:01:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.694 15:01:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.694 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.694 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:47.694 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.694 15:01:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:47.694 15:01:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:47.694 15:01:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:47.694 15:01:30 -- host/auth.sh@44 -- # digest=sha256 00:24:47.694 15:01:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:47.694 15:01:30 -- host/auth.sh@44 -- # keyid=4 00:24:47.694 15:01:30 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:47.694 
15:01:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:47.694 15:01:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:47.694 15:01:30 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:47.694 15:01:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:24:47.694 15:01:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:47.694 15:01:30 -- host/auth.sh@68 -- # digest=sha256 00:24:47.694 15:01:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:47.694 15:01:30 -- host/auth.sh@68 -- # keyid=4 00:24:47.694 15:01:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:47.694 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.694 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:47.694 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.694 15:01:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:47.694 15:01:30 -- nvmf/common.sh@717 -- # local ip 00:24:47.694 15:01:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:47.694 15:01:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:47.694 15:01:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.694 15:01:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.694 15:01:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:47.694 15:01:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.694 15:01:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:47.694 15:01:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:47.694 15:01:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:47.694 15:01:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.694 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.694 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:48.264 nvme0n1 00:24:48.264 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.264 15:01:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.264 15:01:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:48.264 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.264 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:48.264 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.264 15:01:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.264 15:01:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.264 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.264 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:48.264 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.264 15:01:30 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:48.264 15:01:30 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.264 15:01:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:48.264 15:01:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:48.264 15:01:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:48.264 15:01:30 -- host/auth.sh@44 -- # digest=sha384 00:24:48.264 15:01:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.264 15:01:30 -- host/auth.sh@44 -- # keyid=0 00:24:48.264 15:01:30 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:48.264 15:01:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:48.264 15:01:30 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:48.264 15:01:30 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:48.264 15:01:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:24:48.264 15:01:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:48.264 15:01:30 -- host/auth.sh@68 -- # digest=sha384 00:24:48.264 15:01:30 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:48.264 15:01:30 -- host/auth.sh@68 -- # keyid=0 00:24:48.264 15:01:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.264 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.264 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:48.264 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.264 15:01:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:48.264 15:01:30 -- nvmf/common.sh@717 -- # local ip 00:24:48.264 15:01:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:48.264 15:01:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:48.264 15:01:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.264 15:01:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.264 15:01:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:48.264 15:01:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.264 15:01:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:48.264 15:01:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:48.265 15:01:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:48.265 15:01:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:48.265 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.265 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:48.524 nvme0n1 00:24:48.524 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.524 15:01:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.524 15:01:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:48.524 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.524 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:48.524 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.524 15:01:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.524 15:01:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.524 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.524 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:48.524 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.524 15:01:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:48.524 15:01:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:48.524 15:01:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:48.524 15:01:31 -- host/auth.sh@44 -- # digest=sha384 00:24:48.524 15:01:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.524 15:01:31 -- host/auth.sh@44 -- # keyid=1 00:24:48.524 15:01:31 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:48.524 15:01:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:48.524 
15:01:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:48.524 15:01:31 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:48.524 15:01:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:24:48.525 15:01:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:48.525 15:01:31 -- host/auth.sh@68 -- # digest=sha384 00:24:48.525 15:01:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:48.525 15:01:31 -- host/auth.sh@68 -- # keyid=1 00:24:48.525 15:01:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.525 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.525 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:48.525 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.525 15:01:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:48.525 15:01:31 -- nvmf/common.sh@717 -- # local ip 00:24:48.525 15:01:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:48.525 15:01:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:48.525 15:01:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.525 15:01:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.525 15:01:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:48.525 15:01:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.525 15:01:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:48.525 15:01:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:48.525 15:01:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:48.525 15:01:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:48.525 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.525 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:48.786 nvme0n1 00:24:48.786 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.786 15:01:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.786 15:01:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:48.786 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.786 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:48.786 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.786 15:01:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.786 15:01:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.786 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.786 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:48.786 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.786 15:01:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:48.786 15:01:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:48.786 15:01:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:48.786 15:01:31 -- host/auth.sh@44 -- # digest=sha384 00:24:48.786 15:01:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.786 15:01:31 -- host/auth.sh@44 -- # keyid=2 00:24:48.786 15:01:31 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:48.786 15:01:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:48.786 15:01:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:48.786 15:01:31 -- host/auth.sh@49 -- # echo 
DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:48.786 15:01:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:24:48.786 15:01:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:48.786 15:01:31 -- host/auth.sh@68 -- # digest=sha384 00:24:48.786 15:01:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:48.786 15:01:31 -- host/auth.sh@68 -- # keyid=2 00:24:48.786 15:01:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.786 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.786 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:48.786 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.786 15:01:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:48.786 15:01:31 -- nvmf/common.sh@717 -- # local ip 00:24:48.786 15:01:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:48.786 15:01:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:48.786 15:01:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.786 15:01:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.786 15:01:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:48.786 15:01:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.786 15:01:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:48.786 15:01:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:48.786 15:01:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:48.786 15:01:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:48.786 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.786 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.046 nvme0n1 00:24:49.046 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.046 15:01:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.046 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.046 15:01:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:49.046 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.046 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.046 15:01:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.046 15:01:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.046 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.046 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.046 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.046 15:01:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:49.046 15:01:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:49.046 15:01:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:49.046 15:01:31 -- host/auth.sh@44 -- # digest=sha384 00:24:49.046 15:01:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.046 15:01:31 -- host/auth.sh@44 -- # keyid=3 00:24:49.046 15:01:31 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:49.046 15:01:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:49.046 15:01:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:49.046 15:01:31 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:49.046 15:01:31 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:24:49.046 15:01:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:49.046 15:01:31 -- host/auth.sh@68 -- # digest=sha384 00:24:49.046 15:01:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:49.046 15:01:31 -- host/auth.sh@68 -- # keyid=3 00:24:49.046 15:01:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:49.046 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.046 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.046 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.046 15:01:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:49.046 15:01:31 -- nvmf/common.sh@717 -- # local ip 00:24:49.046 15:01:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:49.046 15:01:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:49.046 15:01:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.046 15:01:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.046 15:01:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:49.046 15:01:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.046 15:01:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:49.046 15:01:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:49.046 15:01:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:49.046 15:01:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:49.046 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.046 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.306 nvme0n1 00:24:49.306 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.306 15:01:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.306 15:01:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:49.306 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.306 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.306 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.306 15:01:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.306 15:01:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.306 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.306 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.306 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.306 15:01:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:49.306 15:01:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:49.306 15:01:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:49.306 15:01:31 -- host/auth.sh@44 -- # digest=sha384 00:24:49.306 15:01:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.306 15:01:31 -- host/auth.sh@44 -- # keyid=4 00:24:49.306 15:01:31 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:49.306 15:01:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:49.306 15:01:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:49.306 15:01:31 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:49.306 15:01:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:24:49.306 15:01:31 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:24:49.306 15:01:31 -- host/auth.sh@68 -- # digest=sha384 00:24:49.306 15:01:31 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:49.306 15:01:31 -- host/auth.sh@68 -- # keyid=4 00:24:49.306 15:01:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:49.306 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.306 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.306 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.306 15:01:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:49.306 15:01:31 -- nvmf/common.sh@717 -- # local ip 00:24:49.306 15:01:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:49.306 15:01:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:49.306 15:01:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.306 15:01:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.306 15:01:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:49.306 15:01:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.306 15:01:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:49.306 15:01:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:49.306 15:01:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:49.306 15:01:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.306 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.306 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.306 nvme0n1 00:24:49.306 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.306 15:01:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.306 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.306 15:01:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:49.306 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:49.306 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.566 15:01:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.566 15:01:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.566 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.566 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:49.566 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.566 15:01:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.566 15:01:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:49.566 15:01:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:49.566 15:01:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:49.566 15:01:32 -- host/auth.sh@44 -- # digest=sha384 00:24:49.566 15:01:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.566 15:01:32 -- host/auth.sh@44 -- # keyid=0 00:24:49.566 15:01:32 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:49.566 15:01:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:49.566 15:01:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:49.566 15:01:32 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:49.566 15:01:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:24:49.566 15:01:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:49.566 15:01:32 -- host/auth.sh@68 -- # 
digest=sha384 00:24:49.566 15:01:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:49.566 15:01:32 -- host/auth.sh@68 -- # keyid=0 00:24:49.566 15:01:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:49.566 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.566 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:49.566 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.566 15:01:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:49.566 15:01:32 -- nvmf/common.sh@717 -- # local ip 00:24:49.566 15:01:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:49.566 15:01:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:49.566 15:01:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.566 15:01:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.566 15:01:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:49.566 15:01:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.566 15:01:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:49.566 15:01:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:49.566 15:01:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:49.566 15:01:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:49.566 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.566 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:49.566 nvme0n1 00:24:49.566 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.566 15:01:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.566 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.566 15:01:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:49.566 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:49.828 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.828 15:01:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.828 15:01:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.828 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.828 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:49.829 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.829 15:01:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:49.829 15:01:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:49.829 15:01:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:49.829 15:01:32 -- host/auth.sh@44 -- # digest=sha384 00:24:49.829 15:01:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.829 15:01:32 -- host/auth.sh@44 -- # keyid=1 00:24:49.829 15:01:32 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:49.829 15:01:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:49.829 15:01:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:49.829 15:01:32 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:49.829 15:01:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:24:49.829 15:01:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:49.829 15:01:32 -- host/auth.sh@68 -- # digest=sha384 00:24:49.829 15:01:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:49.829 15:01:32 -- host/auth.sh@68 
-- # keyid=1 00:24:49.829 15:01:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:49.829 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.829 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:49.829 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.829 15:01:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:49.829 15:01:32 -- nvmf/common.sh@717 -- # local ip 00:24:49.829 15:01:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:49.829 15:01:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:49.829 15:01:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.829 15:01:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.829 15:01:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:49.829 15:01:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.829 15:01:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:49.829 15:01:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:49.829 15:01:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:49.829 15:01:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:49.829 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.829 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:49.829 nvme0n1 00:24:49.829 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.829 15:01:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.829 15:01:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:49.829 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.829 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:50.088 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.088 15:01:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.088 15:01:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.088 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.088 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:50.088 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.088 15:01:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.088 15:01:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:50.088 15:01:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.088 15:01:32 -- host/auth.sh@44 -- # digest=sha384 00:24:50.088 15:01:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.088 15:01:32 -- host/auth.sh@44 -- # keyid=2 00:24:50.088 15:01:32 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:50.088 15:01:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:50.088 15:01:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:50.088 15:01:32 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:50.088 15:01:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:24:50.088 15:01:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.088 15:01:32 -- host/auth.sh@68 -- # digest=sha384 00:24:50.088 15:01:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:50.088 15:01:32 -- host/auth.sh@68 -- # keyid=2 00:24:50.088 15:01:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:50.088 15:01:32 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.088 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:50.088 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.088 15:01:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.088 15:01:32 -- nvmf/common.sh@717 -- # local ip 00:24:50.088 15:01:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.088 15:01:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.088 15:01:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.088 15:01:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.088 15:01:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.088 15:01:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.088 15:01:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.088 15:01:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.088 15:01:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.088 15:01:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:50.088 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.088 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:50.349 nvme0n1 00:24:50.349 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.349 15:01:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.349 15:01:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.349 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.349 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:50.349 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.349 15:01:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.349 15:01:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.349 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.349 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:50.349 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.349 15:01:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.349 15:01:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:50.349 15:01:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.349 15:01:32 -- host/auth.sh@44 -- # digest=sha384 00:24:50.349 15:01:32 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.349 15:01:32 -- host/auth.sh@44 -- # keyid=3 00:24:50.349 15:01:32 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:50.349 15:01:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:50.349 15:01:32 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:50.349 15:01:32 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:50.349 15:01:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:24:50.349 15:01:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.349 15:01:32 -- host/auth.sh@68 -- # digest=sha384 00:24:50.349 15:01:32 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:50.349 15:01:32 -- host/auth.sh@68 -- # keyid=3 00:24:50.349 15:01:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:50.349 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.349 15:01:32 -- common/autotest_common.sh@10 -- # set +x 
00:24:50.349 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.349 15:01:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.349 15:01:32 -- nvmf/common.sh@717 -- # local ip 00:24:50.349 15:01:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.349 15:01:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.349 15:01:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.349 15:01:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.349 15:01:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.349 15:01:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.349 15:01:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.349 15:01:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.349 15:01:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.349 15:01:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:50.349 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.349 15:01:32 -- common/autotest_common.sh@10 -- # set +x 00:24:50.610 nvme0n1 00:24:50.610 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.610 15:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.610 15:01:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.610 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.610 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:50.610 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.610 15:01:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.610 15:01:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.610 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.610 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:50.610 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.610 15:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.610 15:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:50.610 15:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.610 15:01:33 -- host/auth.sh@44 -- # digest=sha384 00:24:50.610 15:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.610 15:01:33 -- host/auth.sh@44 -- # keyid=4 00:24:50.610 15:01:33 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:50.610 15:01:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:50.610 15:01:33 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:50.610 15:01:33 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:50.610 15:01:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:24:50.610 15:01:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.610 15:01:33 -- host/auth.sh@68 -- # digest=sha384 00:24:50.610 15:01:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:50.610 15:01:33 -- host/auth.sh@68 -- # keyid=4 00:24:50.610 15:01:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:50.610 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.610 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:50.610 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
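For orientation, each connect_authenticate pass in the trace above reduces to the sketch below; rpc_cmd, nvmet_auth_set_key and get_main_ns_ip are the harness helpers visible in the trace, and the digest/dhgroup/keyid values here are illustrative picks from the iterations shown, not a new test case.

    # one DH-HMAC-CHAP round-trip: provision the target key, pin the host to a
    # single digest/dhgroup pair, connect with that key, verify, tear down
    digest=sha384; dhgroup=ffdhe3072; keyid=2          # illustrative values from the trace
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target-side key for this keyid
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"               # host-side negotiation policy
    ip=$(get_main_ns_ip)                               # resolves to 10.0.0.1 in this run
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # controller came up
    rpc_cmd bdev_nvme_detach_controller nvme0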
00:24:50.610 15:01:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.610 15:01:33 -- nvmf/common.sh@717 -- # local ip 00:24:50.610 15:01:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.610 15:01:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.610 15:01:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.610 15:01:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.610 15:01:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.610 15:01:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.610 15:01:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.610 15:01:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.610 15:01:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.610 15:01:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.610 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.610 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:50.870 nvme0n1 00:24:50.870 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.870 15:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.870 15:01:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.870 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.870 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:50.870 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.870 15:01:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.870 15:01:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.870 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.870 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:50.870 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.870 15:01:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:50.870 15:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.870 15:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:50.870 15:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.870 15:01:33 -- host/auth.sh@44 -- # digest=sha384 00:24:50.870 15:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.870 15:01:33 -- host/auth.sh@44 -- # keyid=0 00:24:50.870 15:01:33 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:50.870 15:01:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:50.870 15:01:33 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:50.870 15:01:33 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:50.870 15:01:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:24:50.870 15:01:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.870 15:01:33 -- host/auth.sh@68 -- # digest=sha384 00:24:50.870 15:01:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:50.870 15:01:33 -- host/auth.sh@68 -- # keyid=0 00:24:50.870 15:01:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:50.870 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.870 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:50.870 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.870 15:01:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.870 15:01:33 -- 
nvmf/common.sh@717 -- # local ip 00:24:50.870 15:01:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.870 15:01:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.870 15:01:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.870 15:01:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.870 15:01:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.870 15:01:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.870 15:01:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.870 15:01:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.870 15:01:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.870 15:01:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:50.870 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.870 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:51.130 nvme0n1 00:24:51.130 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.130 15:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.130 15:01:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:51.130 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.130 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:51.130 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.130 15:01:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.130 15:01:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.130 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.130 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:51.130 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.130 15:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:51.130 15:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:51.130 15:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:51.130 15:01:33 -- host/auth.sh@44 -- # digest=sha384 00:24:51.130 15:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.130 15:01:33 -- host/auth.sh@44 -- # keyid=1 00:24:51.130 15:01:33 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:51.130 15:01:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:51.130 15:01:33 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:51.130 15:01:33 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:51.130 15:01:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:24:51.130 15:01:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:51.130 15:01:33 -- host/auth.sh@68 -- # digest=sha384 00:24:51.130 15:01:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:51.130 15:01:33 -- host/auth.sh@68 -- # keyid=1 00:24:51.130 15:01:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:51.130 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.130 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:51.130 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.130 15:01:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:51.130 15:01:33 -- nvmf/common.sh@717 -- # local ip 00:24:51.130 15:01:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:51.130 15:01:33 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:51.130 15:01:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.130 15:01:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.130 15:01:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:51.130 15:01:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.130 15:01:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:51.130 15:01:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:51.130 15:01:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:51.130 15:01:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:51.130 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.130 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:51.401 nvme0n1 00:24:51.401 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.402 15:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.402 15:01:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:51.402 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.402 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:51.402 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.402 15:01:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.402 15:01:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.402 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.402 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:51.402 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.402 15:01:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:51.402 15:01:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:51.402 15:01:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:51.402 15:01:34 -- host/auth.sh@44 -- # digest=sha384 00:24:51.402 15:01:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.402 15:01:34 -- host/auth.sh@44 -- # keyid=2 00:24:51.402 15:01:34 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:51.402 15:01:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:51.402 15:01:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:51.402 15:01:34 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:51.402 15:01:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:24:51.402 15:01:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:51.402 15:01:34 -- host/auth.sh@68 -- # digest=sha384 00:24:51.402 15:01:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:51.402 15:01:34 -- host/auth.sh@68 -- # keyid=2 00:24:51.402 15:01:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:51.402 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.402 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:51.664 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.664 15:01:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:51.664 15:01:34 -- nvmf/common.sh@717 -- # local ip 00:24:51.664 15:01:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:51.664 15:01:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:51.664 15:01:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.664 15:01:34 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.664 15:01:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:51.664 15:01:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.664 15:01:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:51.664 15:01:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:51.664 15:01:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:51.664 15:01:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:51.664 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.664 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:51.924 nvme0n1 00:24:51.924 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.924 15:01:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.924 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.924 15:01:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:51.924 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:51.924 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.924 15:01:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.924 15:01:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.924 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.924 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:51.924 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.924 15:01:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:51.924 15:01:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:51.924 15:01:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:51.924 15:01:34 -- host/auth.sh@44 -- # digest=sha384 00:24:51.924 15:01:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.924 15:01:34 -- host/auth.sh@44 -- # keyid=3 00:24:51.924 15:01:34 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:51.924 15:01:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:51.924 15:01:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:51.924 15:01:34 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:51.924 15:01:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:24:51.924 15:01:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:51.924 15:01:34 -- host/auth.sh@68 -- # digest=sha384 00:24:51.924 15:01:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:51.924 15:01:34 -- host/auth.sh@68 -- # keyid=3 00:24:51.924 15:01:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:51.924 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.924 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:51.924 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.924 15:01:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:51.924 15:01:34 -- nvmf/common.sh@717 -- # local ip 00:24:51.924 15:01:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:51.924 15:01:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:51.924 15:01:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.924 15:01:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.924 15:01:34 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:24:51.924 15:01:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.924 15:01:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:51.924 15:01:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:51.924 15:01:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:51.924 15:01:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:51.924 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.924 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:52.185 nvme0n1 00:24:52.185 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.185 15:01:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.185 15:01:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:52.185 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.185 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:52.185 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.185 15:01:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.185 15:01:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.185 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.185 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:52.185 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.185 15:01:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:52.185 15:01:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:52.185 15:01:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:52.185 15:01:34 -- host/auth.sh@44 -- # digest=sha384 00:24:52.185 15:01:34 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:52.185 15:01:34 -- host/auth.sh@44 -- # keyid=4 00:24:52.185 15:01:34 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:52.185 15:01:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:52.185 15:01:34 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:52.185 15:01:34 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:52.185 15:01:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:24:52.185 15:01:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:52.185 15:01:34 -- host/auth.sh@68 -- # digest=sha384 00:24:52.185 15:01:34 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:52.185 15:01:34 -- host/auth.sh@68 -- # keyid=4 00:24:52.185 15:01:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:52.185 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.185 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:52.185 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.185 15:01:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:52.185 15:01:34 -- nvmf/common.sh@717 -- # local ip 00:24:52.185 15:01:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:52.185 15:01:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:52.185 15:01:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.185 15:01:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.185 15:01:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:52.185 15:01:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:24:52.185 15:01:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:52.185 15:01:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:52.185 15:01:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:52.185 15:01:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.185 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.185 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:52.446 nvme0n1 00:24:52.446 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.446 15:01:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:52.446 15:01:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.446 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.446 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:52.446 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.446 15:01:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.446 15:01:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.446 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.446 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:52.446 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.446 15:01:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.446 15:01:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:52.446 15:01:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:52.446 15:01:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:52.446 15:01:35 -- host/auth.sh@44 -- # digest=sha384 00:24:52.446 15:01:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:52.446 15:01:35 -- host/auth.sh@44 -- # keyid=0 00:24:52.446 15:01:35 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:52.446 15:01:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:52.446 15:01:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:52.446 15:01:35 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:52.446 15:01:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:24:52.446 15:01:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:52.446 15:01:35 -- host/auth.sh@68 -- # digest=sha384 00:24:52.446 15:01:35 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:52.446 15:01:35 -- host/auth.sh@68 -- # keyid=0 00:24:52.446 15:01:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:52.446 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.446 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:52.446 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.708 15:01:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:52.708 15:01:35 -- nvmf/common.sh@717 -- # local ip 00:24:52.708 15:01:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:52.708 15:01:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:52.708 15:01:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.708 15:01:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.708 15:01:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:52.708 15:01:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.708 15:01:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:52.708 
15:01:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:52.708 15:01:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:52.708 15:01:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:52.708 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.708 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:52.968 nvme0n1 00:24:52.968 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.968 15:01:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.968 15:01:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:52.968 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.968 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:52.968 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.968 15:01:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.968 15:01:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.968 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.968 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:53.227 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.227 15:01:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:53.227 15:01:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:53.227 15:01:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:53.227 15:01:35 -- host/auth.sh@44 -- # digest=sha384 00:24:53.227 15:01:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.227 15:01:35 -- host/auth.sh@44 -- # keyid=1 00:24:53.227 15:01:35 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:53.227 15:01:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:53.227 15:01:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:53.227 15:01:35 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:53.227 15:01:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:24:53.227 15:01:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:53.227 15:01:35 -- host/auth.sh@68 -- # digest=sha384 00:24:53.227 15:01:35 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:53.227 15:01:35 -- host/auth.sh@68 -- # keyid=1 00:24:53.227 15:01:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:53.227 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.227 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:53.227 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.227 15:01:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:53.227 15:01:35 -- nvmf/common.sh@717 -- # local ip 00:24:53.227 15:01:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:53.227 15:01:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:53.227 15:01:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.227 15:01:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.227 15:01:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:53.227 15:01:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.227 15:01:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:53.227 15:01:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:53.227 15:01:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
00:24:53.227 15:01:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:53.227 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.227 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:53.488 nvme0n1 00:24:53.488 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.488 15:01:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.488 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.488 15:01:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:53.488 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:53.488 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.749 15:01:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.749 15:01:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.749 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.749 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:53.749 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.749 15:01:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:53.749 15:01:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:53.749 15:01:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:53.749 15:01:36 -- host/auth.sh@44 -- # digest=sha384 00:24:53.749 15:01:36 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.749 15:01:36 -- host/auth.sh@44 -- # keyid=2 00:24:53.749 15:01:36 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:53.749 15:01:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:53.749 15:01:36 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:53.749 15:01:36 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:53.749 15:01:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:24:53.749 15:01:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:53.749 15:01:36 -- host/auth.sh@68 -- # digest=sha384 00:24:53.749 15:01:36 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:53.749 15:01:36 -- host/auth.sh@68 -- # keyid=2 00:24:53.749 15:01:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:53.749 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.749 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:53.749 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.749 15:01:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:53.749 15:01:36 -- nvmf/common.sh@717 -- # local ip 00:24:53.749 15:01:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:53.749 15:01:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:53.749 15:01:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.749 15:01:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.749 15:01:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:53.749 15:01:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.749 15:01:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:53.749 15:01:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:53.749 15:01:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:53.749 15:01:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:53.749 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.749 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:54.009 nvme0n1 00:24:54.009 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.009 15:01:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.009 15:01:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:54.009 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.009 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:54.009 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.269 15:01:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.269 15:01:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.269 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.269 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:54.269 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.269 15:01:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:54.269 15:01:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:54.269 15:01:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:54.269 15:01:36 -- host/auth.sh@44 -- # digest=sha384 00:24:54.269 15:01:36 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.269 15:01:36 -- host/auth.sh@44 -- # keyid=3 00:24:54.269 15:01:36 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:54.269 15:01:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:54.269 15:01:36 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:54.269 15:01:36 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:54.269 15:01:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:24:54.269 15:01:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:54.269 15:01:36 -- host/auth.sh@68 -- # digest=sha384 00:24:54.269 15:01:36 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:54.269 15:01:36 -- host/auth.sh@68 -- # keyid=3 00:24:54.269 15:01:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:54.269 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.269 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:54.269 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.269 15:01:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:54.269 15:01:36 -- nvmf/common.sh@717 -- # local ip 00:24:54.269 15:01:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:54.269 15:01:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:54.269 15:01:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.269 15:01:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.269 15:01:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:54.269 15:01:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.269 15:01:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:54.269 15:01:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:54.269 15:01:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:54.269 15:01:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:54.269 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:24:54.269 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 nvme0n1 00:24:54.840 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.840 15:01:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.840 15:01:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:54.840 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.840 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.840 15:01:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.840 15:01:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.840 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.840 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.840 15:01:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:54.840 15:01:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:54.840 15:01:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:54.840 15:01:37 -- host/auth.sh@44 -- # digest=sha384 00:24:54.840 15:01:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.840 15:01:37 -- host/auth.sh@44 -- # keyid=4 00:24:54.840 15:01:37 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:54.840 15:01:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:54.840 15:01:37 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:54.840 15:01:37 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:54.840 15:01:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:24:54.840 15:01:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:54.840 15:01:37 -- host/auth.sh@68 -- # digest=sha384 00:24:54.840 15:01:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:54.840 15:01:37 -- host/auth.sh@68 -- # keyid=4 00:24:54.840 15:01:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:54.840 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.840 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.840 15:01:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:54.840 15:01:37 -- nvmf/common.sh@717 -- # local ip 00:24:54.840 15:01:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:54.840 15:01:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:54.840 15:01:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.840 15:01:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.840 15:01:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:54.840 15:01:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.840 15:01:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:54.840 15:01:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:54.840 15:01:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:54.840 15:01:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.840 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.840 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:55.101 
nvme0n1 00:24:55.101 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.101 15:01:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.101 15:01:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:55.101 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.101 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:55.101 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.362 15:01:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.362 15:01:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.362 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.362 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:55.362 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.362 15:01:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.362 15:01:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:55.362 15:01:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:55.362 15:01:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:55.362 15:01:37 -- host/auth.sh@44 -- # digest=sha384 00:24:55.362 15:01:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.362 15:01:37 -- host/auth.sh@44 -- # keyid=0 00:24:55.362 15:01:37 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:55.362 15:01:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:55.362 15:01:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:55.362 15:01:37 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:55.362 15:01:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:24:55.362 15:01:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:55.362 15:01:37 -- host/auth.sh@68 -- # digest=sha384 00:24:55.362 15:01:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:55.362 15:01:37 -- host/auth.sh@68 -- # keyid=0 00:24:55.362 15:01:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:55.362 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.362 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:55.362 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.362 15:01:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:55.362 15:01:37 -- nvmf/common.sh@717 -- # local ip 00:24:55.362 15:01:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:55.362 15:01:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:55.362 15:01:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.362 15:01:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.362 15:01:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:55.362 15:01:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.362 15:01:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:55.362 15:01:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:55.362 15:01:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:55.362 15:01:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:55.362 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.362 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:55.936 nvme0n1 00:24:55.936 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
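Stepping back, the stretch of the trace from ffdhe2048 through ffdhe8192 is that same round-trip repeated over every dhgroup/keyid combination with the digest held at sha384; a minimal sketch of the outer loop, assuming the keys array and helpers used by the harness, would be:

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do                 # keyids 0-4 in this run
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done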
00:24:55.936 15:01:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.936 15:01:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:55.936 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.936 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:55.936 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.197 15:01:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.197 15:01:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.197 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.197 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:56.197 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.197 15:01:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:56.197 15:01:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:56.197 15:01:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:56.197 15:01:38 -- host/auth.sh@44 -- # digest=sha384 00:24:56.197 15:01:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.197 15:01:38 -- host/auth.sh@44 -- # keyid=1 00:24:56.197 15:01:38 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:56.197 15:01:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:56.197 15:01:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:56.197 15:01:38 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:56.197 15:01:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:24:56.197 15:01:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:56.197 15:01:38 -- host/auth.sh@68 -- # digest=sha384 00:24:56.197 15:01:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:56.197 15:01:38 -- host/auth.sh@68 -- # keyid=1 00:24:56.197 15:01:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:56.197 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.197 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:56.197 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.197 15:01:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:56.197 15:01:38 -- nvmf/common.sh@717 -- # local ip 00:24:56.197 15:01:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:56.197 15:01:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:56.197 15:01:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.197 15:01:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.197 15:01:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:56.197 15:01:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.197 15:01:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:56.197 15:01:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:56.197 15:01:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:56.197 15:01:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:56.197 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.197 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:56.768 nvme0n1 00:24:56.768 15:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.768 15:01:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.768 15:01:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.768 15:01:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:56.768 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:24:56.768 15:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.028 15:01:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.028 15:01:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.028 15:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.028 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:24:57.028 15:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.028 15:01:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:57.028 15:01:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:57.028 15:01:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:57.028 15:01:39 -- host/auth.sh@44 -- # digest=sha384 00:24:57.028 15:01:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.028 15:01:39 -- host/auth.sh@44 -- # keyid=2 00:24:57.028 15:01:39 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:57.028 15:01:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:57.028 15:01:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:57.028 15:01:39 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:57.028 15:01:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:24:57.028 15:01:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:57.028 15:01:39 -- host/auth.sh@68 -- # digest=sha384 00:24:57.028 15:01:39 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:57.028 15:01:39 -- host/auth.sh@68 -- # keyid=2 00:24:57.028 15:01:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:57.028 15:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.028 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:24:57.028 15:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.028 15:01:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:57.028 15:01:39 -- nvmf/common.sh@717 -- # local ip 00:24:57.028 15:01:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:57.028 15:01:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:57.028 15:01:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.028 15:01:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.028 15:01:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:57.028 15:01:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.028 15:01:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:57.028 15:01:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:57.028 15:01:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:57.028 15:01:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:57.028 15:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.028 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:24:57.599 nvme0n1 00:24:57.599 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.599 15:01:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.599 15:01:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:57.599 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.599 15:01:40 -- common/autotest_common.sh@10 
-- # set +x 00:24:57.599 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.860 15:01:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.860 15:01:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.860 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.860 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:57.860 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.860 15:01:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:57.860 15:01:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:57.860 15:01:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:57.860 15:01:40 -- host/auth.sh@44 -- # digest=sha384 00:24:57.860 15:01:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.860 15:01:40 -- host/auth.sh@44 -- # keyid=3 00:24:57.860 15:01:40 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:57.860 15:01:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:57.860 15:01:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:57.860 15:01:40 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:24:57.860 15:01:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:24:57.860 15:01:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:57.860 15:01:40 -- host/auth.sh@68 -- # digest=sha384 00:24:57.860 15:01:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:57.860 15:01:40 -- host/auth.sh@68 -- # keyid=3 00:24:57.860 15:01:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:57.860 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.860 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:57.860 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.860 15:01:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:57.860 15:01:40 -- nvmf/common.sh@717 -- # local ip 00:24:57.860 15:01:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:57.860 15:01:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:57.860 15:01:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.860 15:01:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.860 15:01:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:57.860 15:01:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.860 15:01:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:57.860 15:01:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:57.860 15:01:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:57.860 15:01:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:57.860 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.860 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:58.430 nvme0n1 00:24:58.430 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.430 15:01:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.430 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.430 15:01:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:58.430 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:58.430 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.692 15:01:41 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.692 15:01:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.692 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.692 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:58.692 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.692 15:01:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:58.692 15:01:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:58.692 15:01:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:58.692 15:01:41 -- host/auth.sh@44 -- # digest=sha384 00:24:58.692 15:01:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.692 15:01:41 -- host/auth.sh@44 -- # keyid=4 00:24:58.692 15:01:41 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:58.692 15:01:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:58.692 15:01:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:58.692 15:01:41 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:24:58.692 15:01:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:24:58.692 15:01:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:58.692 15:01:41 -- host/auth.sh@68 -- # digest=sha384 00:24:58.692 15:01:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:58.692 15:01:41 -- host/auth.sh@68 -- # keyid=4 00:24:58.692 15:01:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:58.692 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.692 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:58.692 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.692 15:01:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:58.692 15:01:41 -- nvmf/common.sh@717 -- # local ip 00:24:58.692 15:01:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:58.692 15:01:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:58.692 15:01:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.692 15:01:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.692 15:01:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:58.692 15:01:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.692 15:01:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:58.692 15:01:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:58.692 15:01:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:58.692 15:01:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.692 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.692 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.264 nvme0n1 00:24:59.264 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.264 15:01:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.264 15:01:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:59.264 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.264 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.264 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.525 15:01:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.525 15:01:41 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.525 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.525 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.525 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.525 15:01:41 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:59.525 15:01:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.525 15:01:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:59.525 15:01:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:59.525 15:01:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:59.525 15:01:41 -- host/auth.sh@44 -- # digest=sha512 00:24:59.525 15:01:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.525 15:01:41 -- host/auth.sh@44 -- # keyid=0 00:24:59.525 15:01:41 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:59.525 15:01:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:59.525 15:01:41 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:59.525 15:01:41 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:24:59.525 15:01:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:24:59.525 15:01:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:59.525 15:01:41 -- host/auth.sh@68 -- # digest=sha512 00:24:59.525 15:01:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:59.525 15:01:41 -- host/auth.sh@68 -- # keyid=0 00:24:59.525 15:01:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:59.525 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.525 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.525 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.525 15:01:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:59.525 15:01:41 -- nvmf/common.sh@717 -- # local ip 00:24:59.525 15:01:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:59.525 15:01:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:59.525 15:01:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.525 15:01:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.525 15:01:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:59.525 15:01:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.525 15:01:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:59.525 15:01:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:59.525 15:01:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:59.525 15:01:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:59.525 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.525 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:59.525 nvme0n1 00:24:59.525 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.525 15:01:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.525 15:01:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:59.525 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.525 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:59.525 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.525 15:01:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.525 15:01:42 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.525 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.525 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:59.525 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.525 15:01:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:59.525 15:01:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:59.525 15:01:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:59.525 15:01:42 -- host/auth.sh@44 -- # digest=sha512 00:24:59.525 15:01:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.525 15:01:42 -- host/auth.sh@44 -- # keyid=1 00:24:59.525 15:01:42 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:59.525 15:01:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:59.525 15:01:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:59.525 15:01:42 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:24:59.525 15:01:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:24:59.525 15:01:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:59.525 15:01:42 -- host/auth.sh@68 -- # digest=sha512 00:24:59.525 15:01:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:59.525 15:01:42 -- host/auth.sh@68 -- # keyid=1 00:24:59.525 15:01:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:59.525 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.525 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:59.786 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.786 15:01:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:59.786 15:01:42 -- nvmf/common.sh@717 -- # local ip 00:24:59.786 15:01:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:59.786 15:01:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:59.786 15:01:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.786 15:01:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.786 15:01:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:59.786 15:01:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.786 15:01:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:59.786 15:01:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:59.786 15:01:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:59.786 15:01:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:59.786 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.786 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:59.786 nvme0n1 00:24:59.786 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.786 15:01:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.786 15:01:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:59.786 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.786 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:59.786 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.786 15:01:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.786 15:01:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.786 15:01:42 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:24:59.786 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:59.786 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.786 15:01:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:59.786 15:01:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:59.786 15:01:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:59.786 15:01:42 -- host/auth.sh@44 -- # digest=sha512 00:24:59.786 15:01:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.786 15:01:42 -- host/auth.sh@44 -- # keyid=2 00:24:59.786 15:01:42 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:59.786 15:01:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:59.786 15:01:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:59.786 15:01:42 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:24:59.786 15:01:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:24:59.786 15:01:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:59.786 15:01:42 -- host/auth.sh@68 -- # digest=sha512 00:24:59.786 15:01:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:59.786 15:01:42 -- host/auth.sh@68 -- # keyid=2 00:24:59.786 15:01:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:59.786 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.786 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:59.786 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.786 15:01:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:59.786 15:01:42 -- nvmf/common.sh@717 -- # local ip 00:24:59.786 15:01:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:59.786 15:01:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:59.786 15:01:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.786 15:01:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.786 15:01:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:59.786 15:01:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.786 15:01:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:59.786 15:01:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:59.786 15:01:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:59.786 15:01:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:59.786 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.786 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:25:00.048 nvme0n1 00:25:00.048 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.048 15:01:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.048 15:01:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:00.048 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.048 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:25:00.048 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.048 15:01:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.048 15:01:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.048 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.048 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:25:00.048 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.048 
15:01:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:00.048 15:01:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:00.048 15:01:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:00.048 15:01:42 -- host/auth.sh@44 -- # digest=sha512 00:25:00.048 15:01:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.048 15:01:42 -- host/auth.sh@44 -- # keyid=3 00:25:00.048 15:01:42 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:00.048 15:01:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:00.048 15:01:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:00.048 15:01:42 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:00.048 15:01:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:25:00.048 15:01:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:00.048 15:01:42 -- host/auth.sh@68 -- # digest=sha512 00:25:00.048 15:01:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:00.048 15:01:42 -- host/auth.sh@68 -- # keyid=3 00:25:00.048 15:01:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:00.048 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.048 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:25:00.048 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.048 15:01:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:00.048 15:01:42 -- nvmf/common.sh@717 -- # local ip 00:25:00.048 15:01:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.048 15:01:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.048 15:01:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.048 15:01:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.048 15:01:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.048 15:01:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.048 15:01:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.048 15:01:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.048 15:01:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.048 15:01:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:00.048 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.048 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:25:00.310 nvme0n1 00:25:00.310 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.310 15:01:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.310 15:01:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:00.310 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.310 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:25:00.310 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.310 15:01:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.310 15:01:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.310 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.310 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:25:00.310 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.310 15:01:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:00.310 15:01:42 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:25:00.310 15:01:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:00.310 15:01:42 -- host/auth.sh@44 -- # digest=sha512 00:25:00.310 15:01:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.310 15:01:42 -- host/auth.sh@44 -- # keyid=4 00:25:00.310 15:01:42 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:00.310 15:01:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:00.310 15:01:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:00.310 15:01:42 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:00.310 15:01:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:25:00.310 15:01:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:00.310 15:01:42 -- host/auth.sh@68 -- # digest=sha512 00:25:00.310 15:01:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:00.310 15:01:42 -- host/auth.sh@68 -- # keyid=4 00:25:00.310 15:01:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:00.310 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.310 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:25:00.310 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.310 15:01:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:00.310 15:01:42 -- nvmf/common.sh@717 -- # local ip 00:25:00.310 15:01:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.310 15:01:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.310 15:01:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.310 15:01:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.310 15:01:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.310 15:01:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.310 15:01:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.310 15:01:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.310 15:01:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.310 15:01:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.310 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.310 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:25:00.572 nvme0n1 00:25:00.572 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.572 15:01:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.572 15:01:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:00.572 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.572 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:00.572 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.572 15:01:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.572 15:01:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.572 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.572 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:00.572 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.572 15:01:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.572 15:01:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:00.572 15:01:43 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:25:00.572 15:01:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:00.572 15:01:43 -- host/auth.sh@44 -- # digest=sha512 00:25:00.572 15:01:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.572 15:01:43 -- host/auth.sh@44 -- # keyid=0 00:25:00.572 15:01:43 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:25:00.572 15:01:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:00.572 15:01:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:00.572 15:01:43 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:25:00.572 15:01:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:25:00.572 15:01:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:00.572 15:01:43 -- host/auth.sh@68 -- # digest=sha512 00:25:00.572 15:01:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:00.572 15:01:43 -- host/auth.sh@68 -- # keyid=0 00:25:00.572 15:01:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:00.572 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.572 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:00.572 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.572 15:01:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:00.572 15:01:43 -- nvmf/common.sh@717 -- # local ip 00:25:00.572 15:01:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.572 15:01:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.572 15:01:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.572 15:01:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.572 15:01:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.572 15:01:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.572 15:01:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.572 15:01:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.572 15:01:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.572 15:01:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:00.572 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.572 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:00.832 nvme0n1 00:25:00.832 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.832 15:01:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.832 15:01:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:00.832 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.832 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:00.832 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.832 15:01:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.832 15:01:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.832 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.832 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:00.832 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.832 15:01:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:00.832 15:01:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:00.832 15:01:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:00.832 15:01:43 -- host/auth.sh@44 -- # 
digest=sha512 00:25:00.832 15:01:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.832 15:01:43 -- host/auth.sh@44 -- # keyid=1 00:25:00.832 15:01:43 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:00.832 15:01:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:00.832 15:01:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:00.832 15:01:43 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:00.832 15:01:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:25:00.832 15:01:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:00.832 15:01:43 -- host/auth.sh@68 -- # digest=sha512 00:25:00.832 15:01:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:00.832 15:01:43 -- host/auth.sh@68 -- # keyid=1 00:25:00.832 15:01:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:00.832 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.832 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:00.832 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.832 15:01:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:00.832 15:01:43 -- nvmf/common.sh@717 -- # local ip 00:25:00.832 15:01:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.832 15:01:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.832 15:01:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.832 15:01:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.832 15:01:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.832 15:01:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.832 15:01:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.832 15:01:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.832 15:01:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.832 15:01:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:00.832 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.832 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:01.091 nvme0n1 00:25:01.091 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.091 15:01:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.091 15:01:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:01.091 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.091 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:01.091 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.091 15:01:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.091 15:01:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.091 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.091 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:01.091 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.091 15:01:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:01.091 15:01:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:01.091 15:01:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:01.091 15:01:43 -- host/auth.sh@44 -- # digest=sha512 00:25:01.091 15:01:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.091 15:01:43 -- host/auth.sh@44 
-- # keyid=2 00:25:01.091 15:01:43 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:25:01.091 15:01:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:01.091 15:01:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:01.091 15:01:43 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:25:01.091 15:01:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:25:01.091 15:01:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:01.091 15:01:43 -- host/auth.sh@68 -- # digest=sha512 00:25:01.091 15:01:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:01.091 15:01:43 -- host/auth.sh@68 -- # keyid=2 00:25:01.091 15:01:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:01.091 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.091 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:01.091 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.091 15:01:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:01.091 15:01:43 -- nvmf/common.sh@717 -- # local ip 00:25:01.091 15:01:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:01.091 15:01:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:01.091 15:01:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.091 15:01:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.091 15:01:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:01.091 15:01:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.091 15:01:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:01.091 15:01:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:01.091 15:01:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:01.091 15:01:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:01.091 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.091 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:01.351 nvme0n1 00:25:01.351 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.351 15:01:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.351 15:01:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:01.351 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.351 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:01.351 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.351 15:01:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.351 15:01:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.351 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.351 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:01.351 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.351 15:01:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:01.351 15:01:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:01.351 15:01:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:01.351 15:01:43 -- host/auth.sh@44 -- # digest=sha512 00:25:01.351 15:01:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.351 15:01:43 -- host/auth.sh@44 -- # keyid=3 00:25:01.351 15:01:43 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:01.351 15:01:43 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:01.351 15:01:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:01.351 15:01:43 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:01.351 15:01:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:25:01.351 15:01:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:01.351 15:01:43 -- host/auth.sh@68 -- # digest=sha512 00:25:01.351 15:01:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:01.351 15:01:43 -- host/auth.sh@68 -- # keyid=3 00:25:01.351 15:01:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:01.351 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.351 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:01.351 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.351 15:01:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:01.351 15:01:43 -- nvmf/common.sh@717 -- # local ip 00:25:01.351 15:01:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:01.351 15:01:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:01.351 15:01:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.351 15:01:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.351 15:01:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:01.351 15:01:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.351 15:01:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:01.351 15:01:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:01.351 15:01:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:01.351 15:01:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:01.351 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.351 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:25:01.611 nvme0n1 00:25:01.611 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.611 15:01:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.611 15:01:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:01.611 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.611 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:01.611 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.611 15:01:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.611 15:01:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.611 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.611 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:01.611 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.611 15:01:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:01.611 15:01:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:01.611 15:01:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:01.611 15:01:44 -- host/auth.sh@44 -- # digest=sha512 00:25:01.611 15:01:44 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.611 15:01:44 -- host/auth.sh@44 -- # keyid=4 00:25:01.611 15:01:44 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:01.611 15:01:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:01.611 15:01:44 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:25:01.611 15:01:44 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:01.611 15:01:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:25:01.611 15:01:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:01.611 15:01:44 -- host/auth.sh@68 -- # digest=sha512 00:25:01.611 15:01:44 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:01.611 15:01:44 -- host/auth.sh@68 -- # keyid=4 00:25:01.611 15:01:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:01.611 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.611 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:01.611 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.611 15:01:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:01.611 15:01:44 -- nvmf/common.sh@717 -- # local ip 00:25:01.611 15:01:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:01.611 15:01:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:01.611 15:01:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.611 15:01:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.611 15:01:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:01.611 15:01:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.611 15:01:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:01.611 15:01:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:01.611 15:01:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:01.611 15:01:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.611 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.611 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:01.871 nvme0n1 00:25:01.871 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.871 15:01:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.871 15:01:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:01.871 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.871 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:01.871 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.871 15:01:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.871 15:01:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.871 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.871 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:01.871 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.871 15:01:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.871 15:01:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:01.871 15:01:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:01.871 15:01:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:01.871 15:01:44 -- host/auth.sh@44 -- # digest=sha512 00:25:01.871 15:01:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.871 15:01:44 -- host/auth.sh@44 -- # keyid=0 00:25:01.872 15:01:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:25:01.872 15:01:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:01.872 15:01:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:01.872 15:01:44 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:25:01.872 15:01:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:25:01.872 15:01:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:01.872 15:01:44 -- host/auth.sh@68 -- # digest=sha512 00:25:01.872 15:01:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:01.872 15:01:44 -- host/auth.sh@68 -- # keyid=0 00:25:01.872 15:01:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:01.872 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.872 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:01.872 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.872 15:01:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:01.872 15:01:44 -- nvmf/common.sh@717 -- # local ip 00:25:01.872 15:01:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:01.872 15:01:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:01.872 15:01:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.872 15:01:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.872 15:01:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:01.872 15:01:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.872 15:01:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:01.872 15:01:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:01.872 15:01:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:01.872 15:01:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:01.872 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.872 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:02.133 nvme0n1 00:25:02.133 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.133 15:01:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.133 15:01:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:02.133 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.133 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:02.133 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.133 15:01:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.133 15:01:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.133 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.133 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:02.133 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.133 15:01:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:02.133 15:01:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:02.133 15:01:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:02.133 15:01:44 -- host/auth.sh@44 -- # digest=sha512 00:25:02.133 15:01:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.133 15:01:44 -- host/auth.sh@44 -- # keyid=1 00:25:02.133 15:01:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:02.133 15:01:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:02.133 15:01:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:02.133 15:01:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:02.133 15:01:44 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:25:02.133 15:01:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:02.133 15:01:44 -- host/auth.sh@68 -- # digest=sha512 00:25:02.133 15:01:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:02.133 15:01:44 -- host/auth.sh@68 -- # keyid=1 00:25:02.133 15:01:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:02.133 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.133 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:02.133 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.133 15:01:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:02.133 15:01:44 -- nvmf/common.sh@717 -- # local ip 00:25:02.133 15:01:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:02.133 15:01:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:02.133 15:01:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.133 15:01:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.133 15:01:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:02.133 15:01:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.133 15:01:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:02.133 15:01:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:02.133 15:01:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:02.133 15:01:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:02.133 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.133 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:25:02.393 nvme0n1 00:25:02.653 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.653 15:01:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.653 15:01:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:02.653 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.653 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:02.653 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.653 15:01:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.653 15:01:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.653 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.653 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:02.653 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.653 15:01:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:02.653 15:01:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:02.653 15:01:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:02.653 15:01:45 -- host/auth.sh@44 -- # digest=sha512 00:25:02.653 15:01:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.653 15:01:45 -- host/auth.sh@44 -- # keyid=2 00:25:02.653 15:01:45 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:25:02.653 15:01:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:02.653 15:01:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:02.653 15:01:45 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:25:02.653 15:01:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:25:02.653 15:01:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:02.653 15:01:45 -- 
host/auth.sh@68 -- # digest=sha512 00:25:02.654 15:01:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:02.654 15:01:45 -- host/auth.sh@68 -- # keyid=2 00:25:02.654 15:01:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:02.654 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.654 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:02.654 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.654 15:01:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:02.654 15:01:45 -- nvmf/common.sh@717 -- # local ip 00:25:02.654 15:01:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:02.654 15:01:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:02.654 15:01:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.654 15:01:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.654 15:01:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:02.654 15:01:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.654 15:01:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:02.654 15:01:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:02.654 15:01:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:02.654 15:01:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:02.654 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.654 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:02.914 nvme0n1 00:25:02.914 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.914 15:01:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.914 15:01:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:02.914 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.914 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:02.914 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.914 15:01:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.914 15:01:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.914 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.914 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:02.914 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.914 15:01:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:02.914 15:01:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:02.914 15:01:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:02.914 15:01:45 -- host/auth.sh@44 -- # digest=sha512 00:25:02.914 15:01:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.914 15:01:45 -- host/auth.sh@44 -- # keyid=3 00:25:02.914 15:01:45 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:02.914 15:01:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:02.914 15:01:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:02.914 15:01:45 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:02.914 15:01:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:25:02.914 15:01:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:02.914 15:01:45 -- host/auth.sh@68 -- # digest=sha512 00:25:02.914 15:01:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:02.914 15:01:45 
-- host/auth.sh@68 -- # keyid=3 00:25:02.914 15:01:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:02.914 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.914 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:02.914 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.914 15:01:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:02.914 15:01:45 -- nvmf/common.sh@717 -- # local ip 00:25:02.914 15:01:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:02.914 15:01:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:02.915 15:01:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.915 15:01:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.915 15:01:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:02.915 15:01:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.915 15:01:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:02.915 15:01:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:02.915 15:01:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:02.915 15:01:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:02.915 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.915 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:03.178 nvme0n1 00:25:03.178 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.178 15:01:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.178 15:01:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:03.178 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.178 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:03.178 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.178 15:01:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.178 15:01:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.178 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.178 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:03.178 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.178 15:01:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:03.178 15:01:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:03.178 15:01:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:03.178 15:01:45 -- host/auth.sh@44 -- # digest=sha512 00:25:03.178 15:01:45 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.178 15:01:45 -- host/auth.sh@44 -- # keyid=4 00:25:03.178 15:01:45 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:03.178 15:01:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:03.178 15:01:45 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:03.178 15:01:45 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:03.178 15:01:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:25:03.178 15:01:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:03.178 15:01:45 -- host/auth.sh@68 -- # digest=sha512 00:25:03.178 15:01:45 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:03.178 15:01:45 -- host/auth.sh@68 -- # keyid=4 00:25:03.178 15:01:45 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:03.178 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.178 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:03.520 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.520 15:01:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:03.520 15:01:45 -- nvmf/common.sh@717 -- # local ip 00:25:03.520 15:01:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:03.520 15:01:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:03.520 15:01:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.520 15:01:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.520 15:01:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:03.520 15:01:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.520 15:01:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:03.520 15:01:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:03.520 15:01:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:03.520 15:01:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.520 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.520 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:25:03.520 nvme0n1 00:25:03.520 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.520 15:01:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.520 15:01:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:03.520 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.520 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:25:03.520 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.520 15:01:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.520 15:01:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.520 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.520 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:25:03.820 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.820 15:01:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.820 15:01:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:03.820 15:01:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:03.820 15:01:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:03.820 15:01:46 -- host/auth.sh@44 -- # digest=sha512 00:25:03.820 15:01:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.820 15:01:46 -- host/auth.sh@44 -- # keyid=0 00:25:03.820 15:01:46 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:25:03.821 15:01:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:03.821 15:01:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:03.821 15:01:46 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:25:03.821 15:01:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:25:03.821 15:01:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:03.821 15:01:46 -- host/auth.sh@68 -- # digest=sha512 00:25:03.821 15:01:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:03.821 15:01:46 -- host/auth.sh@68 -- # keyid=0 00:25:03.821 15:01:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
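For readers following the trace: each connect_authenticate iteration above exercises one digest/dhgroup/key combination, and the host side reduces to two SPDK JSON-RPC calls. A minimal standalone sketch using scripts/rpc.py follows; the rpc.py path and the idea that key names such as "key3" were registered earlier in auth.sh are assumptions, since the test drives everything through its rpc_cmd helper against a target brought up before this excerpt.

#!/usr/bin/env bash
# Sketch of one host-side connect_authenticate pass (sha512 / ffdhe4096 / key3),
# assuming a target is already listening on 10.0.0.1:4420 and the DH-HMAC-CHAP
# keys were registered earlier in the test script (not shown in this excerpt).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed path

# Limit the initiator to the digest and DH group under test.
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Connect, authenticating with the selected key; a bdev (nvme0n1) appears on success.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3

# Confirm the controller exists, then detach before the next combination.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'
$RPC bdev_nvme_detach_controller nvme0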
00:25:03.821 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.821 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:25:03.821 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.821 15:01:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:03.821 15:01:46 -- nvmf/common.sh@717 -- # local ip 00:25:03.821 15:01:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:03.821 15:01:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:03.821 15:01:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.821 15:01:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.821 15:01:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:03.821 15:01:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.821 15:01:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:03.821 15:01:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:03.821 15:01:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:03.821 15:01:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:03.821 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.821 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:25:04.081 nvme0n1 00:25:04.081 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.081 15:01:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.081 15:01:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:04.081 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.081 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:25:04.081 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.081 15:01:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.081 15:01:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.081 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.081 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:25:04.081 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.081 15:01:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:04.081 15:01:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:04.081 15:01:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:04.081 15:01:46 -- host/auth.sh@44 -- # digest=sha512 00:25:04.081 15:01:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.081 15:01:46 -- host/auth.sh@44 -- # keyid=1 00:25:04.081 15:01:46 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:04.081 15:01:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:04.081 15:01:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:04.081 15:01:46 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:04.081 15:01:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:25:04.081 15:01:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:04.081 15:01:46 -- host/auth.sh@68 -- # digest=sha512 00:25:04.081 15:01:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:04.081 15:01:46 -- host/auth.sh@68 -- # keyid=1 00:25:04.081 15:01:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:04.081 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.081 15:01:46 -- 
common/autotest_common.sh@10 -- # set +x 00:25:04.341 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.341 15:01:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:04.341 15:01:46 -- nvmf/common.sh@717 -- # local ip 00:25:04.341 15:01:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:04.341 15:01:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:04.341 15:01:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.341 15:01:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.341 15:01:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:04.341 15:01:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.341 15:01:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:04.342 15:01:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:04.342 15:01:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:04.342 15:01:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:04.342 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.342 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:25:04.602 nvme0n1 00:25:04.602 15:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.602 15:01:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.602 15:01:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:04.602 15:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.602 15:01:47 -- common/autotest_common.sh@10 -- # set +x 00:25:04.602 15:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.602 15:01:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.602 15:01:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.602 15:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.602 15:01:47 -- common/autotest_common.sh@10 -- # set +x 00:25:04.862 15:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.862 15:01:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:04.862 15:01:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:04.862 15:01:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:04.862 15:01:47 -- host/auth.sh@44 -- # digest=sha512 00:25:04.862 15:01:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.862 15:01:47 -- host/auth.sh@44 -- # keyid=2 00:25:04.862 15:01:47 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:25:04.862 15:01:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:04.862 15:01:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:04.862 15:01:47 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:25:04.862 15:01:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:25:04.862 15:01:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:04.862 15:01:47 -- host/auth.sh@68 -- # digest=sha512 00:25:04.862 15:01:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:04.862 15:01:47 -- host/auth.sh@68 -- # keyid=2 00:25:04.862 15:01:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:04.862 15:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.862 15:01:47 -- common/autotest_common.sh@10 -- # set +x 00:25:04.862 15:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.862 15:01:47 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:25:04.862 15:01:47 -- nvmf/common.sh@717 -- # local ip 00:25:04.862 15:01:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:04.862 15:01:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:04.862 15:01:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.862 15:01:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.862 15:01:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:04.862 15:01:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.862 15:01:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:04.862 15:01:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:04.862 15:01:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:04.862 15:01:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:04.862 15:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.862 15:01:47 -- common/autotest_common.sh@10 -- # set +x 00:25:05.122 nvme0n1 00:25:05.122 15:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.122 15:01:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.122 15:01:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:05.122 15:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.122 15:01:47 -- common/autotest_common.sh@10 -- # set +x 00:25:05.122 15:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.382 15:01:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.382 15:01:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.382 15:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.382 15:01:47 -- common/autotest_common.sh@10 -- # set +x 00:25:05.382 15:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.382 15:01:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:05.382 15:01:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:05.382 15:01:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:05.382 15:01:47 -- host/auth.sh@44 -- # digest=sha512 00:25:05.382 15:01:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.382 15:01:47 -- host/auth.sh@44 -- # keyid=3 00:25:05.382 15:01:47 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:05.382 15:01:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:05.382 15:01:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:05.382 15:01:47 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:05.382 15:01:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:25:05.382 15:01:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:05.382 15:01:47 -- host/auth.sh@68 -- # digest=sha512 00:25:05.382 15:01:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:05.382 15:01:47 -- host/auth.sh@68 -- # keyid=3 00:25:05.382 15:01:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:05.382 15:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.382 15:01:47 -- common/autotest_common.sh@10 -- # set +x 00:25:05.382 15:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.382 15:01:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:05.382 15:01:47 -- nvmf/common.sh@717 -- # local ip 00:25:05.382 15:01:47 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:25:05.382 15:01:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:05.382 15:01:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.382 15:01:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.382 15:01:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:05.382 15:01:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.382 15:01:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:05.382 15:01:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:05.382 15:01:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:05.382 15:01:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:05.382 15:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.382 15:01:47 -- common/autotest_common.sh@10 -- # set +x 00:25:05.642 nvme0n1 00:25:05.642 15:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.902 15:01:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.902 15:01:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:05.902 15:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.902 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:05.902 15:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.902 15:01:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.902 15:01:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.902 15:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.902 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:05.902 15:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.902 15:01:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:05.902 15:01:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:05.902 15:01:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:05.902 15:01:48 -- host/auth.sh@44 -- # digest=sha512 00:25:05.902 15:01:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.902 15:01:48 -- host/auth.sh@44 -- # keyid=4 00:25:05.902 15:01:48 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:05.902 15:01:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:05.902 15:01:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:05.902 15:01:48 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:05.902 15:01:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:25:05.902 15:01:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:05.902 15:01:48 -- host/auth.sh@68 -- # digest=sha512 00:25:05.902 15:01:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:05.902 15:01:48 -- host/auth.sh@68 -- # keyid=4 00:25:05.902 15:01:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:05.902 15:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.902 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:05.902 15:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:05.902 15:01:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:05.902 15:01:48 -- nvmf/common.sh@717 -- # local ip 00:25:05.902 15:01:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:05.902 15:01:48 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:05.902 15:01:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.902 15:01:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.902 15:01:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:05.902 15:01:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.902 15:01:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:05.902 15:01:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:05.902 15:01:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:05.903 15:01:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.903 15:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.903 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:06.474 nvme0n1 00:25:06.474 15:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.474 15:01:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.474 15:01:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:06.474 15:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.474 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:06.474 15:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.474 15:01:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.474 15:01:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.474 15:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.474 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:06.474 15:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.474 15:01:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.474 15:01:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:06.474 15:01:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:06.474 15:01:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:06.474 15:01:48 -- host/auth.sh@44 -- # digest=sha512 00:25:06.474 15:01:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.474 15:01:48 -- host/auth.sh@44 -- # keyid=0 00:25:06.474 15:01:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:25:06.474 15:01:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:06.474 15:01:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:06.474 15:01:48 -- host/auth.sh@49 -- # echo DHHC-1:00:MzcyNDg2MjY3YzczMjQwMmFiMmQ3ZjBhYWUxZTdhYzJRe7y+: 00:25:06.474 15:01:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:25:06.474 15:01:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:06.474 15:01:48 -- host/auth.sh@68 -- # digest=sha512 00:25:06.474 15:01:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:06.474 15:01:48 -- host/auth.sh@68 -- # keyid=0 00:25:06.474 15:01:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:06.474 15:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.474 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:06.474 15:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.474 15:01:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:06.474 15:01:48 -- nvmf/common.sh@717 -- # local ip 00:25:06.474 15:01:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:06.474 15:01:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:06.474 15:01:48 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.474 15:01:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.474 15:01:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:06.474 15:01:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.474 15:01:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:06.474 15:01:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:06.474 15:01:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:06.474 15:01:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:06.474 15:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.474 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:25:07.044 nvme0n1 00:25:07.044 15:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.044 15:01:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.044 15:01:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:07.044 15:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.044 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:25:07.044 15:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.304 15:01:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.304 15:01:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.305 15:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.305 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:25:07.305 15:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.305 15:01:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:07.305 15:01:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:07.305 15:01:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:07.305 15:01:49 -- host/auth.sh@44 -- # digest=sha512 00:25:07.305 15:01:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.305 15:01:49 -- host/auth.sh@44 -- # keyid=1 00:25:07.305 15:01:49 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:07.305 15:01:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:07.305 15:01:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:07.305 15:01:49 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:07.305 15:01:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:25:07.305 15:01:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:07.305 15:01:49 -- host/auth.sh@68 -- # digest=sha512 00:25:07.305 15:01:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:07.305 15:01:49 -- host/auth.sh@68 -- # keyid=1 00:25:07.305 15:01:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:07.305 15:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.305 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:25:07.305 15:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.305 15:01:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:07.305 15:01:49 -- nvmf/common.sh@717 -- # local ip 00:25:07.305 15:01:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:07.305 15:01:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:07.305 15:01:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.305 15:01:49 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.305 15:01:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:07.305 15:01:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.305 15:01:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:07.305 15:01:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:07.305 15:01:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:07.305 15:01:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:07.305 15:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.305 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:25:07.876 nvme0n1 00:25:07.876 15:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.876 15:01:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.876 15:01:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:07.876 15:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.876 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:25:07.876 15:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.138 15:01:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.138 15:01:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.138 15:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.138 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:25:08.138 15:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.138 15:01:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:08.138 15:01:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:08.138 15:01:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:08.138 15:01:50 -- host/auth.sh@44 -- # digest=sha512 00:25:08.138 15:01:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.138 15:01:50 -- host/auth.sh@44 -- # keyid=2 00:25:08.138 15:01:50 -- host/auth.sh@45 -- # key=DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:25:08.138 15:01:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:08.138 15:01:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:08.138 15:01:50 -- host/auth.sh@49 -- # echo DHHC-1:01:YjhjZGUzMDQ2MjI3ZGY3MzkwMTUxMDkyYmM2YmFkM2RxKq8f: 00:25:08.138 15:01:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:25:08.138 15:01:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:08.138 15:01:50 -- host/auth.sh@68 -- # digest=sha512 00:25:08.138 15:01:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:08.138 15:01:50 -- host/auth.sh@68 -- # keyid=2 00:25:08.138 15:01:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:08.138 15:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.138 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:25:08.138 15:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.138 15:01:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:08.138 15:01:50 -- nvmf/common.sh@717 -- # local ip 00:25:08.138 15:01:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:08.138 15:01:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:08.138 15:01:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.138 15:01:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.138 15:01:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:08.138 15:01:50 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:08.138 15:01:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:08.138 15:01:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:08.138 15:01:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:08.138 15:01:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:08.138 15:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.138 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:25:08.711 nvme0n1 00:25:08.711 15:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.711 15:01:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.711 15:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.711 15:01:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:08.711 15:01:51 -- common/autotest_common.sh@10 -- # set +x 00:25:08.711 15:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.972 15:01:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.972 15:01:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.972 15:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.972 15:01:51 -- common/autotest_common.sh@10 -- # set +x 00:25:08.972 15:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.972 15:01:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:08.972 15:01:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:08.972 15:01:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:08.972 15:01:51 -- host/auth.sh@44 -- # digest=sha512 00:25:08.972 15:01:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.972 15:01:51 -- host/auth.sh@44 -- # keyid=3 00:25:08.972 15:01:51 -- host/auth.sh@45 -- # key=DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:08.972 15:01:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:08.972 15:01:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:08.972 15:01:51 -- host/auth.sh@49 -- # echo DHHC-1:02:ODc0M2ZkMTM5MTBhY2FjNDMwMGY4NzNjN2JhZDQ5MGU0NzI0ZDQ1ZTJkYzk1N2M3K7SeJQ==: 00:25:08.972 15:01:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:25:08.972 15:01:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:08.972 15:01:51 -- host/auth.sh@68 -- # digest=sha512 00:25:08.972 15:01:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:08.972 15:01:51 -- host/auth.sh@68 -- # keyid=3 00:25:08.972 15:01:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:08.972 15:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.972 15:01:51 -- common/autotest_common.sh@10 -- # set +x 00:25:08.972 15:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.972 15:01:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:08.972 15:01:51 -- nvmf/common.sh@717 -- # local ip 00:25:08.972 15:01:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:08.972 15:01:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:08.972 15:01:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.972 15:01:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.972 15:01:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:08.972 15:01:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.972 15:01:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:08.972 15:01:51 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:08.972 15:01:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:08.972 15:01:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:08.972 15:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.972 15:01:51 -- common/autotest_common.sh@10 -- # set +x 00:25:09.544 nvme0n1 00:25:09.544 15:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.544 15:01:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.544 15:01:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:09.544 15:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.544 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:25:09.544 15:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.806 15:01:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.806 15:01:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.806 15:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.806 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:25:09.806 15:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.806 15:01:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:09.806 15:01:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:09.806 15:01:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:09.806 15:01:52 -- host/auth.sh@44 -- # digest=sha512 00:25:09.806 15:01:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.806 15:01:52 -- host/auth.sh@44 -- # keyid=4 00:25:09.806 15:01:52 -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:09.806 15:01:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:09.806 15:01:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:09.806 15:01:52 -- host/auth.sh@49 -- # echo DHHC-1:03:NzViNjMzN2UxZWI3MjNlMjZlMzlkNWFhOTMzMWQyODRkZDZlNzM1ZDUwMzFlZTVmOTExMjJhYjQ1NjkyYmNkM2vmBdA=: 00:25:09.806 15:01:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:25:09.806 15:01:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:09.806 15:01:52 -- host/auth.sh@68 -- # digest=sha512 00:25:09.806 15:01:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:09.806 15:01:52 -- host/auth.sh@68 -- # keyid=4 00:25:09.806 15:01:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:09.806 15:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.806 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:25:09.806 15:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.806 15:01:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:09.806 15:01:52 -- nvmf/common.sh@717 -- # local ip 00:25:09.806 15:01:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:09.806 15:01:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:09.806 15:01:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.806 15:01:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.806 15:01:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:09.806 15:01:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.806 15:01:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:09.806 15:01:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:09.806 15:01:52 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:09.806 15:01:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.806 15:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.806 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:25:10.377 nvme0n1 00:25:10.377 15:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.377 15:01:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.378 15:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.378 15:01:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:10.378 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:25:10.378 15:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.378 15:01:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.378 15:01:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.639 15:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.639 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:25:10.639 15:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.639 15:01:53 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:10.639 15:01:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:10.639 15:01:53 -- host/auth.sh@44 -- # digest=sha256 00:25:10.639 15:01:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.639 15:01:53 -- host/auth.sh@44 -- # keyid=1 00:25:10.639 15:01:53 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:10.639 15:01:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:10.639 15:01:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:10.639 15:01:53 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmEyMWUwZDNiZjJiOGNjNDY4NGRiOTc5YTRjNWU2NTU0NDhmOTQ0NmJkNWI3ZjkwnR/Akw==: 00:25:10.639 15:01:53 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:10.639 15:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.639 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:25:10.639 15:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.639 15:01:53 -- host/auth.sh@119 -- # get_main_ns_ip 00:25:10.639 15:01:53 -- nvmf/common.sh@717 -- # local ip 00:25:10.639 15:01:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:10.639 15:01:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:10.639 15:01:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.639 15:01:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.639 15:01:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.639 15:01:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.639 15:01:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.639 15:01:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.639 15:01:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.639 15:01:53 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:10.639 15:01:53 -- common/autotest_common.sh@638 -- # local es=0 00:25:10.639 15:01:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:10.639 
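The attach attempts wrapped in NOT here are negative checks: connecting with no key, and then with a non-matching key (key2), must be rejected, and the expected outcome is the -32602 "Invalid parameters" JSON-RPC error captured just below. Outside the autotest harness the same assertion can be sketched as a plain shell guard (paths assumed as in the earlier sketch):

# Negative-path sketch: an attach without the proper DH-HMAC-CHAP key must fail;
# treat unexpected success as a test failure, mirroring the NOT helper.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed path
if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "ERROR: unauthenticated connect unexpectedly succeeded" >&2
    exit 1
fi
# rpc.py exits non-zero and prints the JSON-RPC error (-32602, "Invalid
# parameters" in this SPDK revision), which is exactly what the trace records.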
15:01:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:10.639 15:01:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:10.639 15:01:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:10.639 15:01:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:10.639 15:01:53 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:10.639 15:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.639 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:25:10.639 request: 00:25:10.639 { 00:25:10.639 "name": "nvme0", 00:25:10.639 "trtype": "tcp", 00:25:10.639 "traddr": "10.0.0.1", 00:25:10.639 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:10.639 "adrfam": "ipv4", 00:25:10.639 "trsvcid": "4420", 00:25:10.639 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:10.639 "method": "bdev_nvme_attach_controller", 00:25:10.639 "req_id": 1 00:25:10.639 } 00:25:10.639 Got JSON-RPC error response 00:25:10.639 response: 00:25:10.639 { 00:25:10.639 "code": -32602, 00:25:10.639 "message": "Invalid parameters" 00:25:10.639 } 00:25:10.639 15:01:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:10.639 15:01:53 -- common/autotest_common.sh@641 -- # es=1 00:25:10.639 15:01:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:10.639 15:01:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:10.639 15:01:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:10.639 15:01:53 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.639 15:01:53 -- host/auth.sh@121 -- # jq length 00:25:10.639 15:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.639 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:25:10.639 15:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.639 15:01:53 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:25:10.639 15:01:53 -- host/auth.sh@124 -- # get_main_ns_ip 00:25:10.639 15:01:53 -- nvmf/common.sh@717 -- # local ip 00:25:10.639 15:01:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:10.639 15:01:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:10.639 15:01:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.639 15:01:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.639 15:01:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.639 15:01:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.639 15:01:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.639 15:01:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.639 15:01:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.639 15:01:53 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:10.639 15:01:53 -- common/autotest_common.sh@638 -- # local es=0 00:25:10.639 15:01:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:10.639 15:01:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:10.639 15:01:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:10.639 15:01:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:10.639 15:01:53 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:10.639 15:01:53 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:10.639 15:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.639 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:25:10.639 request: 00:25:10.639 { 00:25:10.639 "name": "nvme0", 00:25:10.639 "trtype": "tcp", 00:25:10.639 "traddr": "10.0.0.1", 00:25:10.639 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:10.639 "adrfam": "ipv4", 00:25:10.639 "trsvcid": "4420", 00:25:10.639 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:10.639 "dhchap_key": "key2", 00:25:10.639 "method": "bdev_nvme_attach_controller", 00:25:10.639 "req_id": 1 00:25:10.639 } 00:25:10.639 Got JSON-RPC error response 00:25:10.639 response: 00:25:10.639 { 00:25:10.639 "code": -32602, 00:25:10.639 "message": "Invalid parameters" 00:25:10.639 } 00:25:10.639 15:01:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:10.639 15:01:53 -- common/autotest_common.sh@641 -- # es=1 00:25:10.639 15:01:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:10.639 15:01:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:10.640 15:01:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:10.640 15:01:53 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.640 15:01:53 -- host/auth.sh@127 -- # jq length 00:25:10.640 15:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.640 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:25:10.640 15:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.900 15:01:53 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:25:10.900 15:01:53 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:10.900 15:01:53 -- host/auth.sh@130 -- # cleanup 00:25:10.900 15:01:53 -- host/auth.sh@24 -- # nvmftestfini 00:25:10.900 15:01:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:10.900 15:01:53 -- nvmf/common.sh@117 -- # sync 00:25:10.900 15:01:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:10.900 15:01:53 -- nvmf/common.sh@120 -- # set +e 00:25:10.900 15:01:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:10.900 15:01:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:10.900 rmmod nvme_tcp 00:25:10.900 rmmod nvme_fabrics 00:25:10.900 15:01:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:10.900 15:01:53 -- nvmf/common.sh@124 -- # set -e 00:25:10.900 15:01:53 -- nvmf/common.sh@125 -- # return 0 00:25:10.900 15:01:53 -- nvmf/common.sh@478 -- # '[' -n 1195278 ']' 00:25:10.900 15:01:53 -- nvmf/common.sh@479 -- # killprocess 1195278 00:25:10.900 15:01:53 -- common/autotest_common.sh@936 -- # '[' -z 1195278 ']' 00:25:10.901 15:01:53 -- common/autotest_common.sh@940 -- # kill -0 1195278 00:25:10.901 15:01:53 -- common/autotest_common.sh@941 -- # uname 00:25:10.901 15:01:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:10.901 15:01:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1195278 00:25:10.901 15:01:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:10.901 15:01:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:10.901 15:01:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1195278' 00:25:10.901 killing process with pid 1195278 00:25:10.901 15:01:53 -- common/autotest_common.sh@955 -- # kill 1195278 00:25:10.901 15:01:53 -- 
common/autotest_common.sh@960 -- # wait 1195278 00:25:10.901 15:01:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:10.901 15:01:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:10.901 15:01:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:10.901 15:01:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.901 15:01:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.901 15:01:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.901 15:01:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.901 15:01:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.447 15:01:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.447 15:01:55 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:13.447 15:01:55 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:13.447 15:01:55 -- host/auth.sh@27 -- # clean_kernel_target 00:25:13.447 15:01:55 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:13.447 15:01:55 -- nvmf/common.sh@675 -- # echo 0 00:25:13.447 15:01:55 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:13.447 15:01:55 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:13.447 15:01:55 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:13.447 15:01:55 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:13.447 15:01:55 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:25:13.447 15:01:55 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:25:13.447 15:01:55 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:16.752 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:16.752 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:25:17.323 15:01:59 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.APu /tmp/spdk.key-null.XBG /tmp/spdk.key-sha256.eRY /tmp/spdk.key-sha384.uAM /tmp/spdk.key-sha512.YBx /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:17.323 15:01:59 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:20.620 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:25:20.620 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:25:20.620 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:25:20.620 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:25:20.880 00:25:20.880 real 0m58.263s 00:25:20.880 user 0m51.813s 00:25:20.880 sys 0m15.067s 00:25:20.880 15:02:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:20.880 15:02:03 -- common/autotest_common.sh@10 -- # set +x 00:25:20.880 ************************************ 00:25:20.880 END TEST nvmf_auth 00:25:20.880 ************************************ 00:25:20.880 15:02:03 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:25:20.880 15:02:03 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:20.880 15:02:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:20.880 15:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:20.880 15:02:03 -- common/autotest_common.sh@10 -- # set +x 00:25:21.142 ************************************ 00:25:21.142 START TEST nvmf_digest 00:25:21.142 ************************************ 00:25:21.142 15:02:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:21.142 * Looking for test storage... 
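Before digest.sh gets going, the auth suite's cleanup (visible a few lines up) kills the SPDK app it was driving over RPC, unloads the nvme-tcp/nvme-fabrics host modules, and unwinds the kernel nvmet target it had configured as the authenticating endpoint. Condensed from the clean_kernel_target portion of the trace, that configfs teardown amounts to roughly the following; the shell variables are just shorthand for this sketch, and the NQNs are the ones used throughout this run.

# Teardown of the kernel nvmet target used by the auth tests, as traced above.
SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

rm "$SUBSYS/allowed_hosts/nqn.2024-02.io.spdk:host0"   # drop the allowed-host link
rmdir "$HOST"                                          # remove the host definition
# (the trace also shows an 'echo 0' disable step whose redirect target xtrace hides)
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "$SUBSYS/namespaces/1"                           # detach the backing namespace
rmdir /sys/kernel/config/nvmet/ports/1                 # remove the TCP port
rmdir "$SUBSYS"                                        # and finally the subsystem itself
modprobe -r nvmet_tcp nvmet                            # unload the kernel target modules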
00:25:21.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.142 15:02:03 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.142 15:02:03 -- nvmf/common.sh@7 -- # uname -s 00:25:21.142 15:02:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.142 15:02:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.142 15:02:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.142 15:02:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.142 15:02:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.142 15:02:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.142 15:02:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.142 15:02:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.142 15:02:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.142 15:02:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.142 15:02:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:21.142 15:02:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:21.142 15:02:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.142 15:02:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.142 15:02:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.142 15:02:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.142 15:02:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.142 15:02:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.142 15:02:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.142 15:02:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.142 15:02:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.142 15:02:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.142 15:02:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.142 15:02:03 -- paths/export.sh@5 -- # export PATH 00:25:21.142 15:02:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.142 15:02:03 -- nvmf/common.sh@47 -- # : 0 00:25:21.142 15:02:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.142 15:02:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.142 15:02:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.142 15:02:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.142 15:02:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.142 15:02:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.142 15:02:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.142 15:02:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.142 15:02:03 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:21.142 15:02:03 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:21.142 15:02:03 -- host/digest.sh@16 -- # runtime=2 00:25:21.142 15:02:03 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:21.142 15:02:03 -- host/digest.sh@138 -- # nvmftestinit 00:25:21.142 15:02:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:21.142 15:02:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.142 15:02:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:21.142 15:02:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:21.142 15:02:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:21.142 15:02:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.142 15:02:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.142 15:02:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.142 15:02:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:21.142 15:02:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:21.142 15:02:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:21.142 15:02:03 -- common/autotest_common.sh@10 -- # set +x 00:25:29.286 15:02:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:29.286 15:02:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:29.286 15:02:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:29.286 15:02:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:29.286 15:02:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:29.286 15:02:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:29.286 15:02:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:29.286 15:02:10 -- 
nvmf/common.sh@295 -- # net_devs=() 00:25:29.286 15:02:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:29.286 15:02:10 -- nvmf/common.sh@296 -- # e810=() 00:25:29.286 15:02:10 -- nvmf/common.sh@296 -- # local -ga e810 00:25:29.286 15:02:10 -- nvmf/common.sh@297 -- # x722=() 00:25:29.286 15:02:10 -- nvmf/common.sh@297 -- # local -ga x722 00:25:29.286 15:02:10 -- nvmf/common.sh@298 -- # mlx=() 00:25:29.286 15:02:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:29.286 15:02:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.286 15:02:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:29.286 15:02:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:29.286 15:02:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:29.286 15:02:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.286 15:02:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:29.286 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:29.286 15:02:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.286 15:02:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:29.286 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:29.286 15:02:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:29.286 15:02:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.286 15:02:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.286 15:02:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:29.286 15:02:10 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.286 15:02:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:29.286 Found net devices under 0000:31:00.0: cvl_0_0 00:25:29.286 15:02:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.286 15:02:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.286 15:02:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.286 15:02:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:29.286 15:02:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.286 15:02:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:29.286 Found net devices under 0000:31:00.1: cvl_0_1 00:25:29.286 15:02:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.286 15:02:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:29.286 15:02:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:29.286 15:02:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:29.286 15:02:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:29.286 15:02:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.286 15:02:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.286 15:02:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.286 15:02:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:29.286 15:02:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.286 15:02:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.286 15:02:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:29.286 15:02:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.286 15:02:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.286 15:02:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:29.286 15:02:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:29.286 15:02:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.286 15:02:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.286 15:02:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.286 15:02:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.286 15:02:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:29.287 15:02:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.287 15:02:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.287 15:02:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.287 15:02:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:29.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:25:29.287 00:25:29.287 --- 10.0.0.2 ping statistics --- 00:25:29.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.287 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:25:29.287 15:02:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:25:29.287 00:25:29.287 --- 10.0.0.1 ping statistics --- 00:25:29.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.287 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:25:29.287 15:02:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.287 15:02:10 -- nvmf/common.sh@411 -- # return 0 00:25:29.287 15:02:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:29.287 15:02:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.287 15:02:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:29.287 15:02:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:29.287 15:02:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.287 15:02:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:29.287 15:02:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:29.287 15:02:10 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:29.287 15:02:10 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:29.287 15:02:10 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:29.287 15:02:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:29.287 15:02:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:29.287 15:02:10 -- common/autotest_common.sh@10 -- # set +x 00:25:29.287 ************************************ 00:25:29.287 START TEST nvmf_digest_clean 00:25:29.287 ************************************ 00:25:29.287 15:02:11 -- common/autotest_common.sh@1111 -- # run_digest 00:25:29.287 15:02:11 -- host/digest.sh@120 -- # local dsa_initiator 00:25:29.287 15:02:11 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:29.287 15:02:11 -- host/digest.sh@121 -- # dsa_initiator=false 00:25:29.287 15:02:11 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:29.287 15:02:11 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:29.287 15:02:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:29.287 15:02:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:29.287 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:25:29.287 15:02:11 -- nvmf/common.sh@470 -- # nvmfpid=1212042 00:25:29.287 15:02:11 -- nvmf/common.sh@471 -- # waitforlisten 1212042 00:25:29.287 15:02:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:29.287 15:02:11 -- common/autotest_common.sh@817 -- # '[' -z 1212042 ']' 00:25:29.287 15:02:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.287 15:02:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:29.287 15:02:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.287 15:02:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:29.287 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:25:29.287 [2024-04-26 15:02:11.205682] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
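The nvmf_tcp_init sequence a few lines earlier splits the two cvl_0_* ports of the E810 pair between the default namespace and a dedicated one, so target and initiator traffic crosses a real link on the same host. A minimal sketch of that topology, using only the commands visible in the log:

# target port lives in its own namespace, initiator port stays in the default one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP (port 4420) in through the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt in the remainder of the log is therefore launched under 'ip netns exec cvl_0_0_ns_spdk' and listens on 10.0.0.2:4420, while bdevperf connects from the default namespace as the initiator.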
00:25:29.287 [2024-04-26 15:02:11.205739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.287 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.287 [2024-04-26 15:02:11.278812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.287 [2024-04-26 15:02:11.351048] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.287 [2024-04-26 15:02:11.351086] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.287 [2024-04-26 15:02:11.351093] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.287 [2024-04-26 15:02:11.351099] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.287 [2024-04-26 15:02:11.351105] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.287 [2024-04-26 15:02:11.351125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.547 15:02:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:29.547 15:02:11 -- common/autotest_common.sh@850 -- # return 0 00:25:29.547 15:02:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:29.547 15:02:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:29.547 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:25:29.547 15:02:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.547 15:02:12 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:29.547 15:02:12 -- host/digest.sh@126 -- # common_target_config 00:25:29.547 15:02:12 -- host/digest.sh@43 -- # rpc_cmd 00:25:29.547 15:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.547 15:02:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.547 null0 00:25:29.547 [2024-04-26 15:02:12.101701] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.547 [2024-04-26 15:02:12.125888] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.547 15:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.547 15:02:12 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:29.547 15:02:12 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:29.547 15:02:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:29.547 15:02:12 -- host/digest.sh@80 -- # rw=randread 00:25:29.547 15:02:12 -- host/digest.sh@80 -- # bs=4096 00:25:29.547 15:02:12 -- host/digest.sh@80 -- # qd=128 00:25:29.547 15:02:12 -- host/digest.sh@80 -- # scan_dsa=false 00:25:29.547 15:02:12 -- host/digest.sh@83 -- # bperfpid=1212090 00:25:29.547 15:02:12 -- host/digest.sh@84 -- # waitforlisten 1212090 /var/tmp/bperf.sock 00:25:29.547 15:02:12 -- common/autotest_common.sh@817 -- # '[' -z 1212090 ']' 00:25:29.547 15:02:12 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:29.547 15:02:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:29.547 15:02:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:29.547 15:02:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:29.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:29.547 15:02:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:29.547 15:02:12 -- common/autotest_common.sh@10 -- # set +x 00:25:29.547 [2024-04-26 15:02:12.178518] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:29.547 [2024-04-26 15:02:12.178563] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212090 ] 00:25:29.547 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.806 [2024-04-26 15:02:12.255032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.806 [2024-04-26 15:02:12.317985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.379 15:02:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:30.379 15:02:12 -- common/autotest_common.sh@850 -- # return 0 00:25:30.379 15:02:12 -- host/digest.sh@86 -- # false 00:25:30.379 15:02:12 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:30.379 15:02:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:30.639 15:02:13 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.639 15:02:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.899 nvme0n1 00:25:31.158 15:02:13 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:31.158 15:02:13 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:31.158 Running I/O for 2 seconds... 
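Each run_bperf pass in nvmf_digest_clean follows the same recipe: bdevperf is started paused with --wait-for-rpc (the script backgrounds it and records its pid as bperfpid), the framework is released, an NVMe/TCP controller with data digest enabled is attached over the bperf RPC socket, and only then is the workload kicked off. Condensed from the commands above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start bdevperf idle on core 1, configurable over /var/tmp/bperf.sock
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# finish framework init, then attach the target with TCP data digest enabled (--ddgst)
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# run the configured 2-second workload against the resulting nvme0n1 bdev
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests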
00:25:33.068 00:25:33.068 Latency(us) 00:25:33.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.068 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:33.068 nvme0n1 : 2.00 19798.56 77.34 0.00 0.00 6457.32 2170.88 19660.80 00:25:33.068 =================================================================================================================== 00:25:33.068 Total : 19798.56 77.34 0.00 0.00 6457.32 2170.88 19660.80 00:25:33.068 0 00:25:33.068 15:02:15 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:33.068 15:02:15 -- host/digest.sh@93 -- # get_accel_stats 00:25:33.068 15:02:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:33.068 15:02:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:33.068 | select(.opcode=="crc32c") 00:25:33.068 | "\(.module_name) \(.executed)"' 00:25:33.068 15:02:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:33.329 15:02:15 -- host/digest.sh@94 -- # false 00:25:33.329 15:02:15 -- host/digest.sh@94 -- # exp_module=software 00:25:33.329 15:02:15 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:33.329 15:02:15 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:33.329 15:02:15 -- host/digest.sh@98 -- # killprocess 1212090 00:25:33.329 15:02:15 -- common/autotest_common.sh@936 -- # '[' -z 1212090 ']' 00:25:33.329 15:02:15 -- common/autotest_common.sh@940 -- # kill -0 1212090 00:25:33.329 15:02:15 -- common/autotest_common.sh@941 -- # uname 00:25:33.329 15:02:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:33.329 15:02:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1212090 00:25:33.329 15:02:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:33.329 15:02:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:33.329 15:02:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1212090' 00:25:33.329 killing process with pid 1212090 00:25:33.329 15:02:15 -- common/autotest_common.sh@955 -- # kill 1212090 00:25:33.329 Received shutdown signal, test time was about 2.000000 seconds 00:25:33.329 00:25:33.329 Latency(us) 00:25:33.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.329 =================================================================================================================== 00:25:33.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.329 15:02:15 -- common/autotest_common.sh@960 -- # wait 1212090 00:25:33.589 15:02:16 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:33.589 15:02:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:33.589 15:02:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:33.589 15:02:16 -- host/digest.sh@80 -- # rw=randread 00:25:33.589 15:02:16 -- host/digest.sh@80 -- # bs=131072 00:25:33.589 15:02:16 -- host/digest.sh@80 -- # qd=16 00:25:33.589 15:02:16 -- host/digest.sh@80 -- # scan_dsa=false 00:25:33.589 15:02:16 -- host/digest.sh@83 -- # bperfpid=1212935 00:25:33.589 15:02:16 -- host/digest.sh@84 -- # waitforlisten 1212935 /var/tmp/bperf.sock 00:25:33.589 15:02:16 -- common/autotest_common.sh@817 -- # '[' -z 1212935 ']' 00:25:33.589 15:02:16 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:33.589 15:02:16 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:33.589 15:02:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:33.589 15:02:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:33.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:33.589 15:02:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:33.589 15:02:16 -- common/autotest_common.sh@10 -- # set +x 00:25:33.589 [2024-04-26 15:02:16.054127] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:33.589 [2024-04-26 15:02:16.054187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212935 ] 00:25:33.589 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:33.589 Zero copy mechanism will not be used. 00:25:33.589 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.589 [2024-04-26 15:02:16.128286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.589 [2024-04-26 15:02:16.180140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.530 15:02:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:34.530 15:02:16 -- common/autotest_common.sh@850 -- # return 0 00:25:34.530 15:02:16 -- host/digest.sh@86 -- # false 00:25:34.530 15:02:16 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:34.530 15:02:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:34.530 15:02:17 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.530 15:02:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.790 nvme0n1 00:25:34.790 15:02:17 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:34.790 15:02:17 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:34.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:34.790 Zero copy mechanism will not be used. 00:25:34.790 Running I/O for 2 seconds... 
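After each 2-second run the script does not rely on the I/O numbers alone: it asks the bperf application for accelerator statistics and checks that crc32c digests were actually computed, and by the expected module (software here, because scan_dsa=false). Roughly, the check that surrounds every run:

# which accel module executed crc32c, and how many operations did it complete?
read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
(( acc_executed > 0 ))          # digests must really have been computed
[[ $acc_module == software ]]   # and by the software module in this configuration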
00:25:37.335 00:25:37.335 Latency(us) 00:25:37.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.335 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:37.335 nvme0n1 : 2.00 3336.76 417.09 0.00 0.00 4792.61 716.80 7591.25 00:25:37.335 =================================================================================================================== 00:25:37.335 Total : 3336.76 417.09 0.00 0.00 4792.61 716.80 7591.25 00:25:37.335 0 00:25:37.335 15:02:19 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:37.335 15:02:19 -- host/digest.sh@93 -- # get_accel_stats 00:25:37.335 15:02:19 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:37.335 15:02:19 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:37.335 | select(.opcode=="crc32c") 00:25:37.335 | "\(.module_name) \(.executed)"' 00:25:37.335 15:02:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:37.335 15:02:19 -- host/digest.sh@94 -- # false 00:25:37.335 15:02:19 -- host/digest.sh@94 -- # exp_module=software 00:25:37.335 15:02:19 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:37.335 15:02:19 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:37.335 15:02:19 -- host/digest.sh@98 -- # killprocess 1212935 00:25:37.335 15:02:19 -- common/autotest_common.sh@936 -- # '[' -z 1212935 ']' 00:25:37.335 15:02:19 -- common/autotest_common.sh@940 -- # kill -0 1212935 00:25:37.335 15:02:19 -- common/autotest_common.sh@941 -- # uname 00:25:37.335 15:02:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:37.335 15:02:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1212935 00:25:37.335 15:02:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:37.335 15:02:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:37.335 15:02:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1212935' 00:25:37.335 killing process with pid 1212935 00:25:37.335 15:02:19 -- common/autotest_common.sh@955 -- # kill 1212935 00:25:37.335 Received shutdown signal, test time was about 2.000000 seconds 00:25:37.335 00:25:37.335 Latency(us) 00:25:37.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.335 =================================================================================================================== 00:25:37.335 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:37.335 15:02:19 -- common/autotest_common.sh@960 -- # wait 1212935 00:25:37.335 15:02:19 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:37.335 15:02:19 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:37.335 15:02:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:37.335 15:02:19 -- host/digest.sh@80 -- # rw=randwrite 00:25:37.335 15:02:19 -- host/digest.sh@80 -- # bs=4096 00:25:37.335 15:02:19 -- host/digest.sh@80 -- # qd=128 00:25:37.335 15:02:19 -- host/digest.sh@80 -- # scan_dsa=false 00:25:37.335 15:02:19 -- host/digest.sh@83 -- # bperfpid=1213748 00:25:37.335 15:02:19 -- host/digest.sh@84 -- # waitforlisten 1213748 /var/tmp/bperf.sock 00:25:37.335 15:02:19 -- common/autotest_common.sh@817 -- # '[' -z 1213748 ']' 00:25:37.335 15:02:19 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:37.335 15:02:19 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:37.335 15:02:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:37.335 15:02:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:37.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:37.335 15:02:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:37.335 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:25:37.335 [2024-04-26 15:02:19.813029] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:37.335 [2024-04-26 15:02:19.813085] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213748 ] 00:25:37.335 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.335 [2024-04-26 15:02:19.887960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.335 [2024-04-26 15:02:19.939057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.274 15:02:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:38.275 15:02:20 -- common/autotest_common.sh@850 -- # return 0 00:25:38.275 15:02:20 -- host/digest.sh@86 -- # false 00:25:38.275 15:02:20 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:38.275 15:02:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:38.275 15:02:20 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:38.275 15:02:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:38.535 nvme0n1 00:25:38.535 15:02:21 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:38.535 15:02:21 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:38.796 Running I/O for 2 seconds... 
00:25:40.710 00:25:40.710 Latency(us) 00:25:40.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.710 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:40.710 nvme0n1 : 2.01 21261.70 83.05 0.00 0.00 6012.09 3768.32 11851.09 00:25:40.710 =================================================================================================================== 00:25:40.710 Total : 21261.70 83.05 0.00 0.00 6012.09 3768.32 11851.09 00:25:40.710 0 00:25:40.710 15:02:23 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:40.710 15:02:23 -- host/digest.sh@93 -- # get_accel_stats 00:25:40.710 15:02:23 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:40.710 15:02:23 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:40.710 | select(.opcode=="crc32c") 00:25:40.710 | "\(.module_name) \(.executed)"' 00:25:40.710 15:02:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:41.015 15:02:23 -- host/digest.sh@94 -- # false 00:25:41.015 15:02:23 -- host/digest.sh@94 -- # exp_module=software 00:25:41.015 15:02:23 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:41.015 15:02:23 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:41.015 15:02:23 -- host/digest.sh@98 -- # killprocess 1213748 00:25:41.015 15:02:23 -- common/autotest_common.sh@936 -- # '[' -z 1213748 ']' 00:25:41.015 15:02:23 -- common/autotest_common.sh@940 -- # kill -0 1213748 00:25:41.015 15:02:23 -- common/autotest_common.sh@941 -- # uname 00:25:41.015 15:02:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:41.015 15:02:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1213748 00:25:41.015 15:02:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:41.015 15:02:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:41.015 15:02:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1213748' 00:25:41.015 killing process with pid 1213748 00:25:41.015 15:02:23 -- common/autotest_common.sh@955 -- # kill 1213748 00:25:41.015 Received shutdown signal, test time was about 2.000000 seconds 00:25:41.015 00:25:41.015 Latency(us) 00:25:41.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.015 =================================================================================================================== 00:25:41.015 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:41.015 15:02:23 -- common/autotest_common.sh@960 -- # wait 1213748 00:25:41.015 15:02:23 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:41.015 15:02:23 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:41.015 15:02:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:41.015 15:02:23 -- host/digest.sh@80 -- # rw=randwrite 00:25:41.015 15:02:23 -- host/digest.sh@80 -- # bs=131072 00:25:41.015 15:02:23 -- host/digest.sh@80 -- # qd=16 00:25:41.015 15:02:23 -- host/digest.sh@80 -- # scan_dsa=false 00:25:41.015 15:02:23 -- host/digest.sh@83 -- # bperfpid=1214446 00:25:41.015 15:02:23 -- host/digest.sh@84 -- # waitforlisten 1214446 /var/tmp/bperf.sock 00:25:41.015 15:02:23 -- common/autotest_common.sh@817 -- # '[' -z 1214446 ']' 00:25:41.015 15:02:23 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:41.015 
15:02:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:41.015 15:02:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:41.015 15:02:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:41.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:41.015 15:02:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:41.015 15:02:23 -- common/autotest_common.sh@10 -- # set +x 00:25:41.015 [2024-04-26 15:02:23.650323] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:41.015 [2024-04-26 15:02:23.650380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214446 ] 00:25:41.015 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:41.015 Zero copy mechanism will not be used. 00:25:41.319 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.319 [2024-04-26 15:02:23.724179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.319 [2024-04-26 15:02:23.775626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.891 15:02:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:41.891 15:02:24 -- common/autotest_common.sh@850 -- # return 0 00:25:41.891 15:02:24 -- host/digest.sh@86 -- # false 00:25:41.891 15:02:24 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:41.891 15:02:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:42.152 15:02:24 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.152 15:02:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.412 nvme0n1 00:25:42.412 15:02:24 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:42.412 15:02:24 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:42.412 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:42.412 Zero copy mechanism will not be used. 00:25:42.412 Running I/O for 2 seconds... 
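nvmf_digest_clean sweeps the same helper over four read/write and block-size/queue-depth combinations; in the two 131072-byte cases bdevperf also reports that the I/O size exceeds its 65536-byte zero copy threshold, so those runs exercise the non-zero-copy path. As driven by digest.sh:

#          rw         bs      qd  dsa
run_bperf  randread     4096  128  false   # small blocks, deep queue
run_bperf  randread   131072   16  false   # large blocks, zero copy disabled
run_bperf  randwrite    4096  128  false
run_bperf  randwrite  131072   16  false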
00:25:44.353 00:25:44.353 Latency(us) 00:25:44.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.353 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:44.353 nvme0n1 : 2.00 3964.22 495.53 0.00 0.00 4030.58 2129.92 9393.49 00:25:44.353 =================================================================================================================== 00:25:44.353 Total : 3964.22 495.53 0.00 0.00 4030.58 2129.92 9393.49 00:25:44.353 0 00:25:44.353 15:02:26 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:44.353 15:02:26 -- host/digest.sh@93 -- # get_accel_stats 00:25:44.353 15:02:26 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:44.353 15:02:26 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:44.353 | select(.opcode=="crc32c") 00:25:44.353 | "\(.module_name) \(.executed)"' 00:25:44.353 15:02:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:44.613 15:02:27 -- host/digest.sh@94 -- # false 00:25:44.613 15:02:27 -- host/digest.sh@94 -- # exp_module=software 00:25:44.613 15:02:27 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:44.613 15:02:27 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:44.613 15:02:27 -- host/digest.sh@98 -- # killprocess 1214446 00:25:44.613 15:02:27 -- common/autotest_common.sh@936 -- # '[' -z 1214446 ']' 00:25:44.613 15:02:27 -- common/autotest_common.sh@940 -- # kill -0 1214446 00:25:44.613 15:02:27 -- common/autotest_common.sh@941 -- # uname 00:25:44.613 15:02:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:44.613 15:02:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1214446 00:25:44.613 15:02:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:44.613 15:02:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:44.613 15:02:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1214446' 00:25:44.613 killing process with pid 1214446 00:25:44.613 15:02:27 -- common/autotest_common.sh@955 -- # kill 1214446 00:25:44.613 Received shutdown signal, test time was about 2.000000 seconds 00:25:44.613 00:25:44.613 Latency(us) 00:25:44.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.613 =================================================================================================================== 00:25:44.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.614 15:02:27 -- common/autotest_common.sh@960 -- # wait 1214446 00:25:44.874 15:02:27 -- host/digest.sh@132 -- # killprocess 1212042 00:25:44.874 15:02:27 -- common/autotest_common.sh@936 -- # '[' -z 1212042 ']' 00:25:44.874 15:02:27 -- common/autotest_common.sh@940 -- # kill -0 1212042 00:25:44.874 15:02:27 -- common/autotest_common.sh@941 -- # uname 00:25:44.874 15:02:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:44.874 15:02:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1212042 00:25:44.874 15:02:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:44.874 15:02:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:44.874 15:02:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1212042' 00:25:44.874 killing process with pid 1212042 00:25:44.874 15:02:27 -- common/autotest_common.sh@955 -- # kill 1212042 00:25:44.874 15:02:27 -- common/autotest_common.sh@960 -- # wait 1212042 00:25:44.874 
00:25:44.874 real 0m16.346s 00:25:44.874 user 0m32.289s 00:25:44.874 sys 0m3.290s 00:25:44.874 15:02:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:44.874 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:25:44.874 ************************************ 00:25:44.874 END TEST nvmf_digest_clean 00:25:44.874 ************************************ 00:25:44.874 15:02:27 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:44.874 15:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:44.874 15:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.874 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:25:45.135 ************************************ 00:25:45.135 START TEST nvmf_digest_error 00:25:45.135 ************************************ 00:25:45.135 15:02:27 -- common/autotest_common.sh@1111 -- # run_digest_error 00:25:45.135 15:02:27 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:45.135 15:02:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:45.135 15:02:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:45.135 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:25:45.135 15:02:27 -- nvmf/common.sh@470 -- # nvmfpid=1215172 00:25:45.135 15:02:27 -- nvmf/common.sh@471 -- # waitforlisten 1215172 00:25:45.135 15:02:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:45.135 15:02:27 -- common/autotest_common.sh@817 -- # '[' -z 1215172 ']' 00:25:45.135 15:02:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.135 15:02:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.135 15:02:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.135 15:02:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.135 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:25:45.135 [2024-04-26 15:02:27.735473] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:45.135 [2024-04-26 15:02:27.735555] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.135 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.396 [2024-04-26 15:02:27.809530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.396 [2024-04-26 15:02:27.881994] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.396 [2024-04-26 15:02:27.882036] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.396 [2024-04-26 15:02:27.882043] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.396 [2024-04-26 15:02:27.882050] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.396 [2024-04-26 15:02:27.882055] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:45.396 [2024-04-26 15:02:27.882078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.971 15:02:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:45.971 15:02:28 -- common/autotest_common.sh@850 -- # return 0 00:25:45.971 15:02:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:45.971 15:02:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:45.971 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:25:45.971 15:02:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.971 15:02:28 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:45.971 15:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.971 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:25:45.971 [2024-04-26 15:02:28.547985] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:45.971 15:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.971 15:02:28 -- host/digest.sh@105 -- # common_target_config 00:25:45.971 15:02:28 -- host/digest.sh@43 -- # rpc_cmd 00:25:45.971 15:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.971 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:25:45.971 null0 00:25:45.971 [2024-04-26 15:02:28.628682] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.232 [2024-04-26 15:02:28.652876] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.232 15:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.232 15:02:28 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:46.232 15:02:28 -- host/digest.sh@54 -- # local rw bs qd 00:25:46.232 15:02:28 -- host/digest.sh@56 -- # rw=randread 00:25:46.232 15:02:28 -- host/digest.sh@56 -- # bs=4096 00:25:46.232 15:02:28 -- host/digest.sh@56 -- # qd=128 00:25:46.232 15:02:28 -- host/digest.sh@58 -- # bperfpid=1215518 00:25:46.232 15:02:28 -- host/digest.sh@60 -- # waitforlisten 1215518 /var/tmp/bperf.sock 00:25:46.232 15:02:28 -- common/autotest_common.sh@817 -- # '[' -z 1215518 ']' 00:25:46.232 15:02:28 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:46.232 15:02:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:46.232 15:02:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:46.232 15:02:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:46.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:46.232 15:02:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:46.232 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:25:46.232 [2024-04-26 15:02:28.705737] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:25:46.232 [2024-04-26 15:02:28.705782] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215518 ] 00:25:46.232 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.232 [2024-04-26 15:02:28.780814] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.232 [2024-04-26 15:02:28.833489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.173 15:02:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:47.174 15:02:29 -- common/autotest_common.sh@850 -- # return 0 00:25:47.174 15:02:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:47.174 15:02:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:47.174 15:02:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:47.174 15:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.174 15:02:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.174 15:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.174 15:02:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.174 15:02:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.435 nvme0n1 00:25:47.435 15:02:29 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:47.435 15:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.435 15:02:29 -- common/autotest_common.sh@10 -- # set +x 00:25:47.435 15:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.435 15:02:29 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:47.435 15:02:29 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:47.435 Running I/O for 2 seconds... 
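The error-path variant turns the digest machinery against itself: the target's crc32c operations are assigned to the accel 'error' module at startup, injection stays disabled while the initiator connects with --ddgst, and then crc32c results on the target are corrupted at interval 256, so the initiator's receive path reports data digest errors and retries them indefinitely (--bdev-retry-count -1). In outline (rpc_cmd in the script talks to the target's default /var/tmp/spdk.sock, bperf_rpc to /var/tmp/bperf.sock; paths abbreviated here):

# target side: route crc32c through the error-injection accel module (while still --wait-for-rpc)
rpc.py accel_assign_opc -o crc32c -m error
# initiator side: keep per-error statistics and retry failed I/O forever
rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc.py accel_error_inject_error -o crc32c -t disable         # connect cleanly first
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # inject corrupted crc32c results
bdevperf.py -s /var/tmp/bperf.sock perform_tests
# expected outcome: 'data digest error on tqpair' on the host and READ completions reported
# as COMMAND TRANSIENT TRANSPORT ERROR, as in the lines that follow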
00:25:47.435 [2024-04-26 15:02:29.999878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.435 [2024-04-26 15:02:29.999909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.435 [2024-04-26 15:02:29.999917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.435 [2024-04-26 15:02:30.015761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.435 [2024-04-26 15:02:30.015783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.435 [2024-04-26 15:02:30.015790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.435 [2024-04-26 15:02:30.028569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.435 [2024-04-26 15:02:30.028588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.435 [2024-04-26 15:02:30.028594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.435 [2024-04-26 15:02:30.041119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.435 [2024-04-26 15:02:30.041137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.435 [2024-04-26 15:02:30.041144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.435 [2024-04-26 15:02:30.052093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.435 [2024-04-26 15:02:30.052110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.435 [2024-04-26 15:02:30.052117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.435 [2024-04-26 15:02:30.065498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.435 [2024-04-26 15:02:30.065519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.435 [2024-04-26 15:02:30.065526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.435 [2024-04-26 15:02:30.079326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.435 [2024-04-26 15:02:30.079344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.435 [2024-04-26 15:02:30.079351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.435 [2024-04-26 15:02:30.092225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.435 [2024-04-26 15:02:30.092242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.435 [2024-04-26 15:02:30.092248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.104817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.104834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.104850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.118491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.118508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.118514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.130433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.130450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.130457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.140605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.140622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.140629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.153534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.153552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.153558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.167228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.167245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.167252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.180534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.180551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.180557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.191121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.191138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.191144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.205060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.205077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.205083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.218595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.218612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.218618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.231662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.231680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.231686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.245121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.245138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.245145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.256844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.256861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.697 [2024-04-26 15:02:30.256868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.269625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.269642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.269648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.282583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.282599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.282610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.295225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.295241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.295248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.306306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.306322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.306329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.319901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.319918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.319924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.697 [2024-04-26 15:02:30.332515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.697 [2024-04-26 15:02:30.332531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.697 [2024-04-26 15:02:30.332537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.698 [2024-04-26 15:02:30.345740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.698 [2024-04-26 15:02:30.345756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.698 [2024-04-26 15:02:30.345762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.698 [2024-04-26 15:02:30.357251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.698 [2024-04-26 15:02:30.357267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.698 [2024-04-26 15:02:30.357273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.370273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.370290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.370296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.383047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.383064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.383070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.395274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.395296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.395302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.405791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.405807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.405813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.420100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.420117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.420123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.433799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.433816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.433822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.446529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.446547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.446553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.459597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.459613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.459619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.471545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.471562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.471568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.484269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.484286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.484292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.495944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.495961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.495967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.509335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.509352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.509358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.522625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 
00:25:47.959 [2024-04-26 15:02:30.522642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.522648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.534572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.534588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.534595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.546416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.546433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.546440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.558132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.558149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.558156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.572715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.572731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.572737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.583760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.583777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.583784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.598283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.598300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.598306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.611314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.611333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.611340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.959 [2024-04-26 15:02:30.623386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:47.959 [2024-04-26 15:02:30.623402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.959 [2024-04-26 15:02:30.623409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.637228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.637245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.637251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.649292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.649308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.649315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.660429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.660446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.660453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.674159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.674175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.674181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.687188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.687204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.687210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.699862] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.699879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.699885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.713369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.713385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.713392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.725280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.725297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.725304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.736492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.736509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.736515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.747663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.747680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.747687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.761056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.761074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.761080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.774115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.774132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.774138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:48.221 [2024-04-26 15:02:30.788025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.788042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.788048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.801773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.801790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.801796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.813401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.813418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.813424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.823868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.823884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.823978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.836912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.836929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.836935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.850254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.850271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.850277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.864874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.864891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.864897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.221 [2024-04-26 15:02:30.876214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.221 [2024-04-26 15:02:30.876231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.221 [2024-04-26 15:02:30.876237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:30.890905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:30.890922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:30.890928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:30.902092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:30.902108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:30.902114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:30.914088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:30.914104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:30.914111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:30.928285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:30.928301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:30.928307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:30.941512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:30.941531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:30.941537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:30.951619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:30.951636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:30.951642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:30.964990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:30.965007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:30.965013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:30.978417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:30.978434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:30.978440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:30.991636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:30.991653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:30.991659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:31.003798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:31.003815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:31.003822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:31.018080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:31.018097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.483 [2024-04-26 15:02:31.018103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.483 [2024-04-26 15:02:31.029874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.483 [2024-04-26 15:02:31.029890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.484 [2024-04-26 15:02:31.029897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.484 [2024-04-26 15:02:31.042119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.484 [2024-04-26 15:02:31.042135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:48.484 [2024-04-26 15:02:31.042142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.484 [2024-04-26 15:02:31.054213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.484 [2024-04-26 15:02:31.054230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.484 [2024-04-26 15:02:31.054237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.484 [2024-04-26 15:02:31.067421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.484 [2024-04-26 15:02:31.067438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.484 [2024-04-26 15:02:31.067444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.484 [2024-04-26 15:02:31.080711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.484 [2024-04-26 15:02:31.080728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.484 [2024-04-26 15:02:31.080735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.484 [2024-04-26 15:02:31.093875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.484 [2024-04-26 15:02:31.093892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.484 [2024-04-26 15:02:31.093898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.484 [2024-04-26 15:02:31.106684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.484 [2024-04-26 15:02:31.106700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.484 [2024-04-26 15:02:31.106706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.484 [2024-04-26 15:02:31.118276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.484 [2024-04-26 15:02:31.118292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.484 [2024-04-26 15:02:31.118299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.484 [2024-04-26 15:02:31.130705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.484 [2024-04-26 15:02:31.130722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3640 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.484 [2024-04-26 15:02:31.130728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.484 [2024-04-26 15:02:31.143437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.484 [2024-04-26 15:02:31.143453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.484 [2024-04-26 15:02:31.143460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.156390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.156407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.156416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.166824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.166845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.166851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.180392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.180407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.180413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.193865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.193881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.193888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.205658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.205674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.205680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.219446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.219463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.219469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.231371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.231388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.231394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.243371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.243387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.243394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.256155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.256173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.256179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.269730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.269747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.269754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.283042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.283059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.745 [2024-04-26 15:02:31.283065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.745 [2024-04-26 15:02:31.295425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.745 [2024-04-26 15:02:31.295441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.295448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.746 [2024-04-26 15:02:31.307440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 
00:25:48.746 [2024-04-26 15:02:31.307457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.307463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.746 [2024-04-26 15:02:31.321080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.746 [2024-04-26 15:02:31.321097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.321103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.746 [2024-04-26 15:02:31.334407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.746 [2024-04-26 15:02:31.334425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.334432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.746 [2024-04-26 15:02:31.345353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.746 [2024-04-26 15:02:31.345370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.345377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.746 [2024-04-26 15:02:31.358361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.746 [2024-04-26 15:02:31.358378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.358385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.746 [2024-04-26 15:02:31.370835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.746 [2024-04-26 15:02:31.370857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.370866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.746 [2024-04-26 15:02:31.383297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.746 [2024-04-26 15:02:31.383313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.383320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.746 [2024-04-26 15:02:31.395874] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.746 [2024-04-26 15:02:31.395891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.395898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.746 [2024-04-26 15:02:31.408422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:48.746 [2024-04-26 15:02:31.408438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.746 [2024-04-26 15:02:31.408445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.007 [2024-04-26 15:02:31.421363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.007 [2024-04-26 15:02:31.421380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.007 [2024-04-26 15:02:31.421387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.007 [2024-04-26 15:02:31.433772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.007 [2024-04-26 15:02:31.433789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.007 [2024-04-26 15:02:31.433795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.007 [2024-04-26 15:02:31.445702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.007 [2024-04-26 15:02:31.445718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.445725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.457601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.457617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.457623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.470348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.470365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.470371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:49.008 [2024-04-26 15:02:31.483847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.483868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.483874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.497514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.497531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.497537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.510527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.510544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.510550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.522410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.522426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.522433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.533720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.533737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.533743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.546144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.546161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.546167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.559658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.559675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.559681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.572156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.572173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.572179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.585552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.585568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.585574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.599101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.599118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.599124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.612039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.612056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.612062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.624912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.624928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.624935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.638284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.638301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.638307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.650318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.650334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.650340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.008 [2024-04-26 15:02:31.660761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.008 [2024-04-26 15:02:31.660778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.008 [2024-04-26 15:02:31.660784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.674885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.674904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.674911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.687463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.687479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.687486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.700872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.700889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.700899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.713218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.713237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.713243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.726072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.726090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.726096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.739278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.739295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.269 [2024-04-26 15:02:31.739301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.752625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.752641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.752648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.764909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.764926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.764932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.776460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.776477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.776483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.789057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.789073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.789080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.802749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.802766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.802772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.269 [2024-04-26 15:02:31.814561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.269 [2024-04-26 15:02:31.814579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.269 [2024-04-26 15:02:31.814585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.270 [2024-04-26 15:02:31.828158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.270 [2024-04-26 15:02:31.828175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:16799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.270 [2024-04-26 15:02:31.828181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.270 [2024-04-26 15:02:31.842812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.270 [2024-04-26 15:02:31.842830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.270 [2024-04-26 15:02:31.842836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.270 [2024-04-26 15:02:31.854913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.270 [2024-04-26 15:02:31.854930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.270 [2024-04-26 15:02:31.854936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.270 [2024-04-26 15:02:31.866608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.270 [2024-04-26 15:02:31.866625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.270 [2024-04-26 15:02:31.866631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.270 [2024-04-26 15:02:31.879803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.270 [2024-04-26 15:02:31.879820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.270 [2024-04-26 15:02:31.879826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.270 [2024-04-26 15:02:31.894098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.270 [2024-04-26 15:02:31.894115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.270 [2024-04-26 15:02:31.894122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.270 [2024-04-26 15:02:31.907260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.270 [2024-04-26 15:02:31.907278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.270 [2024-04-26 15:02:31.907284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.270 [2024-04-26 15:02:31.918994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.270 [2024-04-26 15:02:31.919010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.270 [2024-04-26 15:02:31.919019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.270 [2024-04-26 15:02:31.932471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.270 [2024-04-26 15:02:31.932487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.270 [2024-04-26 15:02:31.932493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.532 [2024-04-26 15:02:31.944346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.532 [2024-04-26 15:02:31.944363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.532 [2024-04-26 15:02:31.944369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.532 [2024-04-26 15:02:31.957636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.532 [2024-04-26 15:02:31.957653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.532 [2024-04-26 15:02:31.957659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.532 [2024-04-26 15:02:31.971007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.532 [2024-04-26 15:02:31.971024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.532 [2024-04-26 15:02:31.971030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.532 [2024-04-26 15:02:31.982512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14d94c0) 00:25:49.532 [2024-04-26 15:02:31.982529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.532 [2024-04-26 15:02:31.982535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.532 00:25:49.532 Latency(us) 00:25:49.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.532 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:49.532 nvme0n1 : 2.00 20098.98 78.51 0.00 0.00 6363.27 2334.72 16602.45 00:25:49.532 =================================================================================================================== 00:25:49.532 Total : 20098.98 78.51 0.00 0.00 6363.27 2334.72 16602.45 00:25:49.532 0 00:25:49.532 15:02:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:49.532 15:02:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
00:25:49.532 15:02:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:49.532 | .driver_specific 00:25:49.532 | .nvme_error 00:25:49.532 | .status_code 00:25:49.532 | .command_transient_transport_error' 00:25:49.532 15:02:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:49.532 15:02:32 -- host/digest.sh@71 -- # (( 157 > 0 )) 00:25:49.532 15:02:32 -- host/digest.sh@73 -- # killprocess 1215518 00:25:49.532 15:02:32 -- common/autotest_common.sh@936 -- # '[' -z 1215518 ']' 00:25:49.532 15:02:32 -- common/autotest_common.sh@940 -- # kill -0 1215518 00:25:49.532 15:02:32 -- common/autotest_common.sh@941 -- # uname 00:25:49.532 15:02:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.532 15:02:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1215518 00:25:49.793 15:02:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:49.793 15:02:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:49.793 15:02:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1215518' 00:25:49.793 killing process with pid 1215518 00:25:49.794 15:02:32 -- common/autotest_common.sh@955 -- # kill 1215518 00:25:49.794 Received shutdown signal, test time was about 2.000000 seconds 00:25:49.794 00:25:49.794 Latency(us) 00:25:49.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.794 =================================================================================================================== 00:25:49.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.794 15:02:32 -- common/autotest_common.sh@960 -- # wait 1215518 00:25:49.794 15:02:32 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:49.794 15:02:32 -- host/digest.sh@54 -- # local rw bs qd 00:25:49.794 15:02:32 -- host/digest.sh@56 -- # rw=randread 00:25:49.794 15:02:32 -- host/digest.sh@56 -- # bs=131072 00:25:49.794 15:02:32 -- host/digest.sh@56 -- # qd=16 00:25:49.794 15:02:32 -- host/digest.sh@58 -- # bperfpid=1216195 00:25:49.794 15:02:32 -- host/digest.sh@60 -- # waitforlisten 1216195 /var/tmp/bperf.sock 00:25:49.794 15:02:32 -- common/autotest_common.sh@817 -- # '[' -z 1216195 ']' 00:25:49.794 15:02:32 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:49.794 15:02:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:49.794 15:02:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:49.794 15:02:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:49.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:49.794 15:02:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:49.794 15:02:32 -- common/autotest_common.sh@10 -- # set +x 00:25:49.794 [2024-04-26 15:02:32.395375] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:49.794 [2024-04-26 15:02:32.395443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216195 ] 00:25:49.794 I/O size of 131072 is greater than zero copy threshold (65536). 
00:25:49.794 Zero copy mechanism will not be used. 00:25:49.794 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.055 [2024-04-26 15:02:32.473172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.055 [2024-04-26 15:02:32.523516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.626 15:02:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:50.626 15:02:33 -- common/autotest_common.sh@850 -- # return 0 00:25:50.626 15:02:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:50.626 15:02:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:50.886 15:02:33 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:50.886 15:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:50.886 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:25:50.886 15:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:50.886 15:02:33 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:50.886 15:02:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.146 nvme0n1 00:25:51.146 15:02:33 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:51.146 15:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.146 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:25:51.146 15:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.146 15:02:33 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:51.146 15:02:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:51.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:51.146 Zero copy mechanism will not be used. 00:25:51.146 Running I/O for 2 seconds... 
00:25:51.146 [2024-04-26 15:02:33.771906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.146 [2024-04-26 15:02:33.771939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.146 [2024-04-26 15:02:33.771948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.146 [2024-04-26 15:02:33.781906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.146 [2024-04-26 15:02:33.781929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.146 [2024-04-26 15:02:33.781936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.146 [2024-04-26 15:02:33.792937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.146 [2024-04-26 15:02:33.792957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.146 [2024-04-26 15:02:33.792963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.146 [2024-04-26 15:02:33.801359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.146 [2024-04-26 15:02:33.801379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.146 [2024-04-26 15:02:33.801386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.811826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.811854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.811860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.821169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.821189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.821196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.832955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.832975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.832981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.844377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.844396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.844402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.856900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.856919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.856925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.867963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.867982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.867988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.877539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.877558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.877565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.886125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.886144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.886150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.895763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.895781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.895787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.906034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.906053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.906060] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.916851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.406 [2024-04-26 15:02:33.916871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.406 [2024-04-26 15:02:33.916878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.406 [2024-04-26 15:02:33.925745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:33.925763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:33.925769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:33.935384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:33.935402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:33.935412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:33.945130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:33.945148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:33.945154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:33.954242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:33.954260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:33.954266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:33.962442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:33.962460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:33.962466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:33.972750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:33.972769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:33.972775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:33.982383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:33.982401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:33.982408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:33.992646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:33.992663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:33.992669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:34.003149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:34.003167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:34.003173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:34.016497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:34.016516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:34.016522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:34.029922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:34.029947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:34.029954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:34.042873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:34.042892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:34.042898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:34.056562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:34.056581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:51.407 [2024-04-26 15:02:34.056587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.407 [2024-04-26 15:02:34.069722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.407 [2024-04-26 15:02:34.069741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.407 [2024-04-26 15:02:34.069747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.082665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.082684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.082690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.095774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.095792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.095798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.106929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.106947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.106953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.118135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.118153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.118160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.128433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.128452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.128458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.141784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.141802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.141808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.152958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.152976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.152982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.164705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.164722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.164729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.177059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.177076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.177082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.186020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.186038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.186044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.194393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.667 [2024-04-26 15:02:34.194411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.667 [2024-04-26 15:02:34.194417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.667 [2024-04-26 15:02:34.204847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.204864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.204870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.215964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.215983] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.215989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.225779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.225797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.225807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.236335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.236353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.236359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.247020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.247038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.247044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.255175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.255193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.255199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.264730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.264748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.264754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.271876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.271895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.271901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.280246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.280264] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.280270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.290256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.290274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.290280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.298540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.298558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.298564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.310301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.310320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.310326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.319221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.319239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.319246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.668 [2024-04-26 15:02:34.329251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.668 [2024-04-26 15:02:34.329270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.668 [2024-04-26 15:02:34.329276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.338555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.338573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.338580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.349123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 
00:25:51.928 [2024-04-26 15:02:34.349141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.349147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.358069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.358087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.358093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.367020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.367038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.367044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.376077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.376095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.376102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.385040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.385058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.385068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.393611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.393629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.393635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.401175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.401192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.401198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.410411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.410429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.410435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.417964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.417982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.417988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.928 [2024-04-26 15:02:34.427442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.928 [2024-04-26 15:02:34.427460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.928 [2024-04-26 15:02:34.427466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.436619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.436638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.436644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.445978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.445996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.446002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.455037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.455056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.455062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.463928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.463949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.463955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.472085] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.472103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.472109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.481738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.481756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.481763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.491786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.491804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.491810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.500614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.500633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.500639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.510140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.510158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.510165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.519390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.519409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.519415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.527385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.527403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.527410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:51.929 [2024-04-26 15:02:34.538491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.538509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.538515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.548374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.548392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.548398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.557190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.557208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.557214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.566280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.566299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.566305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.574454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.574473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.574479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.583273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.583291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.583297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.929 [2024-04-26 15:02:34.592491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:51.929 [2024-04-26 15:02:34.592509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.929 [2024-04-26 15:02:34.592515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.600595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.600614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.600620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.609016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.609034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.609040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.617926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.617944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.617953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.631046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.631063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.631070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.641519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.641537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.641543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.650498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.650517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.650524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.659156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.659174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.659180] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.668217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.668235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.668241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.679439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.679457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.679463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.687112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.687130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.687136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.697570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.697588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.697594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.707000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.707021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.707027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.717406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.717424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.717430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.730245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.730263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.730269] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.742115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.742133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.742140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.751465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.751484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.751491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.759624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.759642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.759648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.769456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.769474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.769480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.778635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.778654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.778660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.786932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.786949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.786955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.795388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.795406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:52.190 [2024-04-26 15:02:34.795413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.803352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.803370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.190 [2024-04-26 15:02:34.803376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.190 [2024-04-26 15:02:34.812206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.190 [2024-04-26 15:02:34.812225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.191 [2024-04-26 15:02:34.812231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.191 [2024-04-26 15:02:34.819680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.191 [2024-04-26 15:02:34.819699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.191 [2024-04-26 15:02:34.819705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.191 [2024-04-26 15:02:34.829359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.191 [2024-04-26 15:02:34.829377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.191 [2024-04-26 15:02:34.829383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.191 [2024-04-26 15:02:34.838765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.191 [2024-04-26 15:02:34.838783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.191 [2024-04-26 15:02:34.838789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.191 [2024-04-26 15:02:34.848591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.191 [2024-04-26 15:02:34.848610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.191 [2024-04-26 15:02:34.848616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.451 [2024-04-26 15:02:34.858210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.451 [2024-04-26 15:02:34.858229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-04-26 15:02:34.858235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.451 [2024-04-26 15:02:34.868734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.451 [2024-04-26 15:02:34.868752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-04-26 15:02:34.868762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.451 [2024-04-26 15:02:34.877995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.451 [2024-04-26 15:02:34.878013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-04-26 15:02:34.878019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.451 [2024-04-26 15:02:34.887924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.451 [2024-04-26 15:02:34.887943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-04-26 15:02:34.887949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.451 [2024-04-26 15:02:34.897666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.451 [2024-04-26 15:02:34.897684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.897690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.907387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.907406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.907412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.915931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.915950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.915956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.923641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.923659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.923666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.932142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.932161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.932167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.941445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.941463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.941470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.951143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.951161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.951167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.961611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.961629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.961636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.970202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.970219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.970225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.979788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.979806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.979812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.989248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.989267] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.989273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:34.997962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:34.997981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:34.997987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.006301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.006319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.006325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.014928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.014946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.014953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.023254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.023272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.023281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.032478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.032496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.032503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.042167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.042186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.042192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.049788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 
00:25:52.452 [2024-04-26 15:02:35.049807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.049813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.060610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.060628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.060634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.068345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.068364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.068370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.077187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.077206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.077212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.086149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.086167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.086173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.093631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.093649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.093655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.102001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.102022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.102028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.452 [2024-04-26 15:02:35.110142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.452 [2024-04-26 15:02:35.110161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-04-26 15:02:35.110167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.120210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.120229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.120235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.130708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.130727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.130733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.138751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.138770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.138776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.149915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.149934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.149940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.158945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.158964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.158970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.168732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.168750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.168756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.178155] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.178173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.178179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.187844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.187867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.187873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.197807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.197826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.197832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.206035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.206053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.206059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.214577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.214595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.214601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.223354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.223373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.223379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.232911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.232929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.232935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:52.714 [2024-04-26 15:02:35.244397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.244416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.244422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.253612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.253630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.253636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.261540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.261557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.261569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.271976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.271995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.272001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.279771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.279789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.279795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.289360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.289379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.289385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.299792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.299811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.299817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.309751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.309769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.309775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.318329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.318348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.318354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.329991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.714 [2024-04-26 15:02:35.330010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.714 [2024-04-26 15:02:35.330016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.714 [2024-04-26 15:02:35.340321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.715 [2024-04-26 15:02:35.340339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.715 [2024-04-26 15:02:35.340345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.715 [2024-04-26 15:02:35.349222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.715 [2024-04-26 15:02:35.349243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.715 [2024-04-26 15:02:35.349249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.715 [2024-04-26 15:02:35.359363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.715 [2024-04-26 15:02:35.359382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.715 [2024-04-26 15:02:35.359388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.715 [2024-04-26 15:02:35.369394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.715 [2024-04-26 15:02:35.369412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.715 [2024-04-26 15:02:35.369418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.715 [2024-04-26 15:02:35.376083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.715 [2024-04-26 15:02:35.376101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.715 [2024-04-26 15:02:35.376107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.385129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.385147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.385154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.393045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.393064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.393070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.403301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.403320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.403326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.413353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.413372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.413379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.422699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.422718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.422724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.433051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.433069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.999 [2024-04-26 15:02:35.433075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.442506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.442524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.442530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.453603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.453621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.453627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.462877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.462895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.462901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.472353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.472372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.472378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.481995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.482013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.482020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.491770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.491788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.491794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.500051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.500069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.500076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.509383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.509404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.509410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.518475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.518493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.518500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.529889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.529907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.529913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.539094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.539112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.539118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.549174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.549193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.549199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.558058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:52.999 [2024-04-26 15:02:35.558076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.999 [2024-04-26 15:02:35.558082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.999 [2024-04-26 15:02:35.567756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.000 [2024-04-26 15:02:35.567774] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.000 [2024-04-26 15:02:35.567780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.000 [2024-04-26 15:02:35.576124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.000 [2024-04-26 15:02:35.576143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.000 [2024-04-26 15:02:35.576149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.000 [2024-04-26 15:02:35.586234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.000 [2024-04-26 15:02:35.586252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.000 [2024-04-26 15:02:35.586258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.000 [2024-04-26 15:02:35.595626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.000 [2024-04-26 15:02:35.595644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.000 [2024-04-26 15:02:35.595650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.000 [2024-04-26 15:02:35.604924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.000 [2024-04-26 15:02:35.604942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.000 [2024-04-26 15:02:35.604948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.000 [2024-04-26 15:02:35.612937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.000 [2024-04-26 15:02:35.612955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.000 [2024-04-26 15:02:35.612961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.000 [2024-04-26 15:02:35.621412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.000 [2024-04-26 15:02:35.621430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.000 [2024-04-26 15:02:35.621436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.000 [2024-04-26 15:02:35.631126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.000 [2024-04-26 15:02:35.631144] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.000 [2024-04-26 15:02:35.631150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.260 [2024-04-26 15:02:35.640945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.260 [2024-04-26 15:02:35.640963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.260 [2024-04-26 15:02:35.640969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.260 [2024-04-26 15:02:35.650325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.260 [2024-04-26 15:02:35.650342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.260 [2024-04-26 15:02:35.650348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.260 [2024-04-26 15:02:35.660071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.260 [2024-04-26 15:02:35.660089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.260 [2024-04-26 15:02:35.660095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.260 [2024-04-26 15:02:35.670433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.260 [2024-04-26 15:02:35.670451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.670461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.261 [2024-04-26 15:02:35.679615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.261 [2024-04-26 15:02:35.679633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.679639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.261 [2024-04-26 15:02:35.690799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.261 [2024-04-26 15:02:35.690817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.690823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.261 [2024-04-26 15:02:35.700565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0e7c0) 00:25:53.261 [2024-04-26 15:02:35.700583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.700589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.261 [2024-04-26 15:02:35.710502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.261 [2024-04-26 15:02:35.710520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.710526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.261 [2024-04-26 15:02:35.719951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.261 [2024-04-26 15:02:35.719971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.719977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.261 [2024-04-26 15:02:35.729733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.261 [2024-04-26 15:02:35.729751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.729757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.261 [2024-04-26 15:02:35.740987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.261 [2024-04-26 15:02:35.741006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.741012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.261 [2024-04-26 15:02:35.750434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.261 [2024-04-26 15:02:35.750452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.750458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.261 [2024-04-26 15:02:35.761391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0e7c0) 00:25:53.261 [2024-04-26 15:02:35.761411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.261 [2024-04-26 15:02:35.761417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.261 00:25:53.261 Latency(us) 00:25:53.261 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max
00:25:53.261 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:53.261 nvme0n1 : 2.00 3216.97 402.12 0.00 0.00 4971.07 764.59 14090.24
00:25:53.261 ===================================================================================================================
00:25:53.261 Total : 3216.97 402.12 0.00 0.00 4971.07 764.59 14090.24
00:25:53.261 0
00:25:53.261 15:02:35 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:53.261 15:02:35 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:53.261 15:02:35 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:53.261 | .driver_specific
00:25:53.261 | .nvme_error
00:25:53.261 | .status_code
00:25:53.261 | .command_transient_transport_error'
00:25:53.261 15:02:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:53.521 15:02:35 -- host/digest.sh@71 -- # (( 207 > 0 ))
00:25:53.521 15:02:35 -- host/digest.sh@73 -- # killprocess 1216195
00:25:53.521 15:02:35 -- common/autotest_common.sh@936 -- # '[' -z 1216195 ']'
00:25:53.521 15:02:35 -- common/autotest_common.sh@940 -- # kill -0 1216195
00:25:53.521 15:02:35 -- common/autotest_common.sh@941 -- # uname
00:25:53.521 15:02:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:53.521 15:02:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1216195
00:25:53.521 15:02:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:53.521 15:02:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:53.521 15:02:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1216195'
00:25:53.521 killing process with pid 1216195
00:25:53.521 15:02:36 -- common/autotest_common.sh@955 -- # kill 1216195
00:25:53.521 Received shutdown signal, test time was about 2.000000 seconds
00:25:53.521
00:25:53.521 Latency(us)
00:25:53.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:53.521 ===================================================================================================================
00:25:53.521 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:53.521 15:02:36 -- common/autotest_common.sh@960 -- # wait 1216195
00:25:53.521 15:02:36 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:53.521 15:02:36 -- host/digest.sh@54 -- # local rw bs qd
00:25:53.521 15:02:36 -- host/digest.sh@56 -- # rw=randwrite
00:25:53.521 15:02:36 -- host/digest.sh@56 -- # bs=4096
00:25:53.521 15:02:36 -- host/digest.sh@56 -- # qd=128
00:25:53.521 15:02:36 -- host/digest.sh@58 -- # bperfpid=1216884
00:25:53.521 15:02:36 -- host/digest.sh@60 -- # waitforlisten 1216884 /var/tmp/bperf.sock
00:25:53.521 15:02:36 -- common/autotest_common.sh@817 -- # '[' -z 1216884 ']'
00:25:53.521 15:02:36 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:53.521 15:02:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:53.521 15:02:36 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:53.521 15:02:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:53.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:53.521 15:02:36 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:53.521 15:02:36 -- common/autotest_common.sh@10 -- # set +x
00:25:53.521 [2024-04-26 15:02:36.171162] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:25:53.521 [2024-04-26 15:02:36.171220] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216884 ]
00:25:53.781 EAL: No free 2048 kB hugepages reported on node 1
00:25:53.781 [2024-04-26 15:02:36.248103] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:53.781 [2024-04-26 15:02:36.301031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:54.349 15:02:36 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:54.349 15:02:36 -- common/autotest_common.sh@850 -- # return 0
00:25:54.349 15:02:36 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:54.349 15:02:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:54.608 15:02:37 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:54.608 15:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:54.608 15:02:37 -- common/autotest_common.sh@10 -- # set +x
00:25:54.608 15:02:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:54.608 15:02:37 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:54.608 15:02:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:54.867 nvme0n1
00:25:54.867 15:02:37 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:54.867 15:02:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:54.867 15:02:37 -- common/autotest_common.sh@10 -- # set +x
00:25:54.867 15:02:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:54.867 15:02:37 -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:54.867 15:02:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:55.127 Running I/O for 2 seconds...
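Note on the sequence just traced, since it is the core of this digest-error pass: host/digest.sh starts a standalone bdevperf, attaches an NVMe-oF TCP controller with data digest enabled, arms crc32c error injection, runs the workload, and later counts the resulting COMMAND TRANSIENT TRANSPORT ERROR completions. A minimal shell sketch of that flow is below; it is not part of the captured log. The $SPDK_DIR, $sock and $bperf_rpc shorthands are illustrative, the address/NQN are simply the values used in this run, and the socket targeted by the accel_error_inject_error calls is an assumption (digest.sh issues them through its generic rpc_cmd helper, not bperf_rpc).

  # Sketch only; commands and flags are taken from the trace above where possible.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bperf.sock
  bperf_rpc="$SPDK_DIR/scripts/rpc.py -s $sock"

  # Start bdevperf with no bdevs configured (-z); digest.sh then waits for $sock
  # to appear (waitforlisten) before sending any RPCs.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &

  # Keep per-controller NVMe error statistics and retry failed commands indefinitely,
  # so injected digest errors show up as transient-error counters rather than I/O failures.
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the controller with data digest enabled (--ddgst), then arm crc32c
  # corruption so subsequent digests mismatch. Pointing these two accel RPCs at the
  # bperf socket is an assumption; the trace only shows them issued via rpc_cmd.
  $bperf_rpc accel_error_inject_error -o crc32c -t disable
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $bperf_rpc accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the 2-second workload, then read back how many commands completed with
  # COMMAND TRANSIENT TRANSPORT ERROR, mirroring the get_transient_errcount check
  # (the "(( 207 > 0 ))" seen after the previous randread pass).
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
  $bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'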
00:25:55.127 [2024-04-26 15:02:37.559464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eb760 00:25:55.127 [2024-04-26 15:02:37.561326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.127 [2024-04-26 15:02:37.561352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:55.127 [2024-04-26 15:02:37.570121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eaef0 00:25:55.127 [2024-04-26 15:02:37.571255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.127 [2024-04-26 15:02:37.571271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:55.127 [2024-04-26 15:02:37.584013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fdeb0 00:25:55.127 [2024-04-26 15:02:37.585819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.127 [2024-04-26 15:02:37.585835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:55.127 [2024-04-26 15:02:37.594727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eff18 00:25:55.127 [2024-04-26 15:02:37.595854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.127 [2024-04-26 15:02:37.595869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.608496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f81e0 00:25:55.128 [2024-04-26 15:02:37.610313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.610329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.618371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ff3c8 00:25:55.128 [2024-04-26 15:02:37.619481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.619496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.632833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eb760 00:25:55.128 [2024-04-26 15:02:37.634641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.634657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.643844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6fa8 00:25:55.128 [2024-04-26 15:02:37.645135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.645151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.655601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fe720 00:25:55.128 [2024-04-26 15:02:37.656709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.656725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.667804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e27f0 00:25:55.128 [2024-04-26 15:02:37.668882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.668898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.681468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fe720 00:25:55.128 [2024-04-26 15:02:37.683251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.683267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.692094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e95a0 00:25:55.128 [2024-04-26 15:02:37.693155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.693171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.704244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0a68 00:25:55.128 [2024-04-26 15:02:37.705315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.705331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.717903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e95a0 00:25:55.128 [2024-04-26 15:02:37.719667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.719683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.730030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f1430 00:25:55.128 [2024-04-26 15:02:37.731778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.731794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.739957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0a68 00:25:55.128 [2024-04-26 15:02:37.741012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.741028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.754661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e7c50 00:25:55.128 [2024-04-26 15:02:37.756420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.756436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.764857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f0788 00:25:55.128 [2024-04-26 15:02:37.766072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.766087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.779259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6cc8 00:25:55.128 [2024-04-26 15:02:37.781185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.781201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:55.128 [2024-04-26 15:02:37.789807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f7538 00:25:55.128 [2024-04-26 15:02:37.791049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.128 [2024-04-26 15:02:37.791065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.387 [2024-04-26 15:02:37.803415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f0ff8 00:25:55.387 [2024-04-26 15:02:37.805297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.387 [2024-04-26 15:02:37.805313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.387 [2024-04-26 15:02:37.813212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f0350 00:25:55.387 [2024-04-26 15:02:37.814392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.387 [2024-04-26 15:02:37.814410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:55.387 [2024-04-26 15:02:37.827649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f8618 00:25:55.387 [2024-04-26 15:02:37.829522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.387 [2024-04-26 15:02:37.829537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:55.387 [2024-04-26 15:02:37.839699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f8e88 00:25:55.387 [2024-04-26 15:02:37.841552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.387 [2024-04-26 15:02:37.841568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:55.387 [2024-04-26 15:02:37.849543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6890 00:25:55.387 [2024-04-26 15:02:37.850652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.387 [2024-04-26 15:02:37.850667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.863997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190efae0 00:25:55.388 [2024-04-26 15:02:37.865835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.865853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.873789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fb480 00:25:55.388 [2024-04-26 15:02:37.874907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.874922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.886142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6020 00:25:55.388 [2024-04-26 15:02:37.887269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.887284] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.899077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eea00 00:25:55.388 [2024-04-26 15:02:37.900234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.900250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.912756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f96f8 00:25:55.388 [2024-04-26 15:02:37.914583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.914598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.922609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6020 00:25:55.388 [2024-04-26 15:02:37.923733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.923751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.935547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eea00 00:25:55.388 [2024-04-26 15:02:37.936703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.936718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.947667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6020 00:25:55.388 [2024-04-26 15:02:37.948759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.948774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.959842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f1868 00:25:55.388 [2024-04-26 15:02:37.960932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.960947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.973474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6020 00:25:55.388 [2024-04-26 15:02:37.975292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.975306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.984485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f4f40 00:25:55.388 [2024-04-26 15:02:37.985769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:37.985785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:37.998357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f1430 00:25:55.388 [2024-04-26 15:02:38.000340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:38.000355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:38.008193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eb760 00:25:55.388 [2024-04-26 15:02:38.009431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:38.009446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:38.020282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190de038 00:25:55.388 [2024-04-26 15:02:38.021540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:38.021555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:38.034663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eaef0 00:25:55.388 [2024-04-26 15:02:38.036618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:38.036633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:55.388 [2024-04-26 15:02:38.046823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6fa8 00:25:55.388 [2024-04-26 15:02:38.048782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.388 [2024-04-26 15:02:38.048797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.056677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190de038 00:25:55.648 [2024-04-26 15:02:38.057906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 
15:02:38.057921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.069617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fe720 00:25:55.648 [2024-04-26 15:02:38.070864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.070886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.081169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e99d8 00:25:55.648 [2024-04-26 15:02:38.082421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.082436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.094276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ddc00 00:25:55.648 [2024-04-26 15:02:38.095699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.095715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.105785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eaef0 00:25:55.648 [2024-04-26 15:02:38.107179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.107194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.120294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ebfd0 00:25:55.648 [2024-04-26 15:02:38.122415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.122431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.130122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e5658 00:25:55.648 [2024-04-26 15:02:38.131541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.131556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.143082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190df550 00:25:55.648 [2024-04-26 15:02:38.144509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24170 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:55.648 [2024-04-26 15:02:38.144524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.156809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190edd58 00:25:55.648 [2024-04-26 15:02:38.158904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.158919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.166629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fda78 00:25:55.648 [2024-04-26 15:02:38.168014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.168029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.179549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f7da8 00:25:55.648 [2024-04-26 15:02:38.181003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.181020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.193265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190edd58 00:25:55.648 [2024-04-26 15:02:38.195347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.195362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.203069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fda78 00:25:55.648 [2024-04-26 15:02:38.204486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.204501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.215948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eaab8 00:25:55.648 [2024-04-26 15:02:38.217359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.217375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.229607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fda78 00:25:55.648 [2024-04-26 15:02:38.231715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17752 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.231730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.239450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190df550 00:25:55.648 [2024-04-26 15:02:38.240845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.648 [2024-04-26 15:02:38.240862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:55.648 [2024-04-26 15:02:38.252351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ef270 00:25:55.648 [2024-04-26 15:02:38.253740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.649 [2024-04-26 15:02:38.253755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:55.649 [2024-04-26 15:02:38.266019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190df550 00:25:55.649 [2024-04-26 15:02:38.268121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.649 [2024-04-26 15:02:38.268136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:55.649 [2024-04-26 15:02:38.278125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fac10 00:25:55.649 [2024-04-26 15:02:38.280203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.649 [2024-04-26 15:02:38.280218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:55.649 [2024-04-26 15:02:38.290234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e5658 00:25:55.649 [2024-04-26 15:02:38.292290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.649 [2024-04-26 15:02:38.292305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:55.649 [2024-04-26 15:02:38.300829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f9b30 00:25:55.649 [2024-04-26 15:02:38.302207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.649 [2024-04-26 15:02:38.302222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.314533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ff3c8 00:25:55.909 [2024-04-26 15:02:38.316590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:6940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.316605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.326667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ef270 00:25:55.909 [2024-04-26 15:02:38.328724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.328739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.338768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e1b48 00:25:55.909 [2024-04-26 15:02:38.340801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.340817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.348622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ed4e8 00:25:55.909 [2024-04-26 15:02:38.349958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.349973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.361577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f9b30 00:25:55.909 [2024-04-26 15:02:38.362915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.362930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.372878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f92c0 00:25:55.909 [2024-04-26 15:02:38.374196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.374211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.384994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e2c28 00:25:55.909 [2024-04-26 15:02:38.386302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.386317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.397926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f35f0 00:25:55.909 [2024-04-26 15:02:38.399270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:86 nsid:1 lba:15588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.399286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.410161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ed4e8 00:25:55.909 [2024-04-26 15:02:38.411468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.411483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.423843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f35f0 00:25:55.909 [2024-04-26 15:02:38.425853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.425871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.435971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e3d08 00:25:55.909 [2024-04-26 15:02:38.437946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.437961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.445896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ed4e8 00:25:55.909 [2024-04-26 15:02:38.447150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.447165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.458018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f46d0 00:25:55.909 [2024-04-26 15:02:38.459301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.459316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.470963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f57b0 00:25:55.909 [2024-04-26 15:02:38.472277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.472292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.483085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f46d0 00:25:55.909 [2024-04-26 15:02:38.484371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.484386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.494408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ddc00 00:25:55.909 [2024-04-26 15:02:38.495659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.495675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.507348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6738 00:25:55.909 [2024-04-26 15:02:38.508622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.508637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.521088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0630 00:25:55.909 [2024-04-26 15:02:38.523037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.523052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.531631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f8a50 00:25:55.909 [2024-04-26 15:02:38.532902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.532917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.542963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6890 00:25:55.909 [2024-04-26 15:02:38.544199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.544215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.557409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6fa8 00:25:55.909 [2024-04-26 15:02:38.559342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.559360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:55.909 [2024-04-26 15:02:38.567242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e73e0 00:25:55.909 [2024-04-26 
15:02:38.568469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.909 [2024-04-26 15:02:38.568484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.580193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6890 00:25:56.171 [2024-04-26 15:02:38.581426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.581443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.593914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6fa8 00:25:56.171 [2024-04-26 15:02:38.595842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.595857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.603753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e73e0 00:25:56.171 [2024-04-26 15:02:38.604971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.604986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.615894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fdeb0 00:25:56.171 [2024-04-26 15:02:38.617105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.617121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.628766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ed4e8 00:25:56.171 [2024-04-26 15:02:38.629967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.629983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.640889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fdeb0 00:25:56.171 [2024-04-26 15:02:38.642096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.642112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.654595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ed4e8 
00:25:56.171 [2024-04-26 15:02:38.656482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.656497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.666631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e73e0 00:25:56.171 [2024-04-26 15:02:38.668493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.668509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.676477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190df118 00:25:56.171 [2024-04-26 15:02:38.677630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.677646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.689417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6458 00:25:56.171 [2024-04-26 15:02:38.690596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.690612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.703118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e73e0 00:25:56.171 [2024-04-26 15:02:38.704979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.704995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.712961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190df118 00:25:56.171 [2024-04-26 15:02:38.714077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.714092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.725108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f4298 00:25:56.171 [2024-04-26 15:02:38.726250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.726266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.738028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with 
pdu=0x2000190e9168 00:25:56.171 [2024-04-26 15:02:38.739195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.739210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.750308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6458 00:25:56.171 [2024-04-26 15:02:38.751444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.751460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.763974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e9168 00:25:56.171 [2024-04-26 15:02:38.765808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.765823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.773808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190df118 00:25:56.171 [2024-04-26 15:02:38.774904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.774919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.785934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f31b8 00:25:56.171 [2024-04-26 15:02:38.787064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.787080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.800608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e95a0 00:25:56.171 [2024-04-26 15:02:38.802407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.802423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.810827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ea248 00:25:56.171 [2024-04-26 15:02:38.812139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.812155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:56.171 [2024-04-26 15:02:38.823801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x235c370) with pdu=0x2000190e4578 00:25:56.171 [2024-04-26 15:02:38.825134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.171 [2024-04-26 15:02:38.825150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.837506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fdeb0 00:25:56.433 [2024-04-26 15:02:38.839514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.839530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.848087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6458 00:25:56.433 [2024-04-26 15:02:38.849360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.849376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.861890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e4578 00:25:56.433 [2024-04-26 15:02:38.863881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.863896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.871729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e5658 00:25:56.433 [2024-04-26 15:02:38.873034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.873052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.886341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ea248 00:25:56.433 [2024-04-26 15:02:38.888331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.888346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.896166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f8618 00:25:56.433 [2024-04-26 15:02:38.897325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.897340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.908210] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0a68 00:25:56.433 [2024-04-26 15:02:38.909331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.909346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.922692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f4f40 00:25:56.433 [2024-04-26 15:02:38.924655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.924670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.933239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6738 00:25:56.433 [2024-04-26 15:02:38.934493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.934509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.945424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0a68 00:25:56.433 [2024-04-26 15:02:38.946661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.946677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.957597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e73e0 00:25:56.433 [2024-04-26 15:02:38.958866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.958882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.971197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0a68 00:25:56.433 [2024-04-26 15:02:38.973131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.973146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:56.433 [2024-04-26 15:02:38.983242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f8e88 00:25:56.433 [2024-04-26 15:02:38.985148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.433 [2024-04-26 15:02:38.985163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.434 [2024-04-26 15:02:38.993079] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0630 00:25:56.434 [2024-04-26 15:02:38.994270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.434 [2024-04-26 15:02:38.994286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.434 [2024-04-26 15:02:39.006004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6fa8 00:25:56.434 [2024-04-26 15:02:39.007220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.434 [2024-04-26 15:02:39.007235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.434 [2024-04-26 15:02:39.019716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f8e88 00:25:56.434 [2024-04-26 15:02:39.021616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.434 [2024-04-26 15:02:39.021631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.434 [2024-04-26 15:02:39.029540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0630 00:25:56.434 [2024-04-26 15:02:39.030730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.434 [2024-04-26 15:02:39.030745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.434 [2024-04-26 15:02:39.042426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6b70 00:25:56.434 [2024-04-26 15:02:39.043604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.434 [2024-04-26 15:02:39.043619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.434 [2024-04-26 15:02:39.056170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0630 00:25:56.434 [2024-04-26 15:02:39.058056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.434 [2024-04-26 15:02:39.058071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:56.434 [2024-04-26 15:02:39.065996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f9f68 00:25:56.434 [2024-04-26 15:02:39.067122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.434 [2024-04-26 15:02:39.067137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:56.434 
[2024-04-26 15:02:39.078933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6b70 00:25:56.434 [2024-04-26 15:02:39.080130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.434 [2024-04-26 15:02:39.080146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.434 [2024-04-26 15:02:39.092639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0630 00:25:56.434 [2024-04-26 15:02:39.094495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.434 [2024-04-26 15:02:39.094511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.102453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f9f68 00:25:56.693 [2024-04-26 15:02:39.103614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.103629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.115418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6b70 00:25:56.693 [2024-04-26 15:02:39.116564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.116579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.129119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e0630 00:25:56.693 [2024-04-26 15:02:39.130991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.131006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.138953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f9f68 00:25:56.693 [2024-04-26 15:02:39.140115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.140130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.151897] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e6b70 00:25:56.693 [2024-04-26 15:02:39.153119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.153134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e 
p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.163961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f9f68 00:25:56.693 [2024-04-26 15:02:39.165093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.165108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.177673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e8d30 00:25:56.693 [2024-04-26 15:02:39.179536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.179551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.189713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e88f8 00:25:56.693 [2024-04-26 15:02:39.191548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.191566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.201813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fef90 00:25:56.693 [2024-04-26 15:02:39.203637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.203651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.211623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fa7d8 00:25:56.693 [2024-04-26 15:02:39.212746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.212761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.226032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e7818 00:25:56.693 [2024-04-26 15:02:39.227849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.693 [2024-04-26 15:02:39.227864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:56.693 [2024-04-26 15:02:39.235847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e4140 00:25:56.693 [2024-04-26 15:02:39.236960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.236975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.248800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fa7d8 00:25:56.694 [2024-04-26 15:02:39.249932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.249948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.262477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e7818 00:25:56.694 [2024-04-26 15:02:39.264299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.264314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.273090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190de038 00:25:56.694 [2024-04-26 15:02:39.274193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.274208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.286772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f5be8 00:25:56.694 [2024-04-26 15:02:39.288590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.288605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.297296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e4140 00:25:56.694 [2024-04-26 15:02:39.298430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.298446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.309428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fa7d8 00:25:56.694 [2024-04-26 15:02:39.310538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.310554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.320755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e95a0 00:25:56.694 [2024-04-26 15:02:39.321828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.321846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.335173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e3060 00:25:56.694 [2024-04-26 15:02:39.336902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.336917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.347314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e9e10 00:25:56.694 [2024-04-26 15:02:39.349090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.349105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:56.694 [2024-04-26 15:02:39.357135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fc128 00:25:56.694 [2024-04-26 15:02:39.358198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.694 [2024-04-26 15:02:39.358213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:56.954 [2024-04-26 15:02:39.370069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e5220 00:25:56.954 [2024-04-26 15:02:39.371146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.954 [2024-04-26 15:02:39.371162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:56.954 [2024-04-26 15:02:39.383817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e9e10 00:25:56.954 [2024-04-26 15:02:39.385578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.954 [2024-04-26 15:02:39.385593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:56.954 [2024-04-26 15:02:39.395930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e1f80 00:25:56.954 [2024-04-26 15:02:39.397681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.954 [2024-04-26 15:02:39.397696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:56.954 [2024-04-26 15:02:39.406474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190e1710 00:25:56.954 [2024-04-26 15:02:39.407537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.954 [2024-04-26 15:02:39.407553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:56.954 [2024-04-26 15:02:39.418578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f0350 00:25:56.954 [2024-04-26 15:02:39.419622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.954 [2024-04-26 15:02:39.419637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:56.954 [2024-04-26 15:02:39.432255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ed0b0 00:25:56.954 [2024-04-26 15:02:39.433985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.954 [2024-04-26 15:02:39.434000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:56.954 [2024-04-26 15:02:39.442111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6cc8 00:25:56.954 [2024-04-26 15:02:39.443136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.954 [2024-04-26 15:02:39.443151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:56.955 [2024-04-26 15:02:39.455072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f0350 00:25:56.955 [2024-04-26 15:02:39.456090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.955 [2024-04-26 15:02:39.456105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:56.955 [2024-04-26 15:02:39.468725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190ed0b0 00:25:56.955 [2024-04-26 15:02:39.470408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.955 [2024-04-26 15:02:39.470423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:56.955 [2024-04-26 15:02:39.478541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f6cc8 00:25:56.955 [2024-04-26 15:02:39.479564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.955 [2024-04-26 15:02:39.479579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:56.955 [2024-04-26 15:02:39.492958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f0350 00:25:56.955 [2024-04-26 15:02:39.494674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.955 [2024-04-26 15:02:39.494689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:56.955 [2024-04-26 15:02:39.503503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f81e0
00:25:56.955 [2024-04-26 15:02:39.504530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.955 [2024-04-26 15:02:39.504548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:56.955 [2024-04-26 15:02:39.514831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190eee38
00:25:56.955 [2024-04-26 15:02:39.515823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.955 [2024-04-26 15:02:39.515841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:56.955 [2024-04-26 15:02:39.527777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190fd640
00:25:56.955 [2024-04-26 15:02:39.528793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.955 [2024-04-26 15:02:39.528808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:56.955 [2024-04-26 15:02:39.541458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c370) with pdu=0x2000190f20d8
00:25:56.955 [2024-04-26 15:02:39.543165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.955 [2024-04-26 15:02:39.543180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:56.955
00:25:56.955 Latency(us)
00:25:56.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:56.955 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:56.955 nvme0n1 : 2.00 20938.84 81.79 0.00 0.00 6105.23 2225.49 14527.15
00:25:56.955 ===================================================================================================================
00:25:56.955 Total : 20938.84 81.79 0.00 0.00 6105.23 2225.49 14527.15
00:25:56.955 0
00:25:56.955 15:02:39 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:56.955 15:02:39 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:56.955 15:02:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:56.955 15:02:39 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:56.955 | .driver_specific
00:25:56.955 | .nvme_error
00:25:56.955 | .status_code
00:25:56.955 | .command_transient_transport_error'
00:25:57.215 15:02:39 -- host/digest.sh@71 -- # (( 164 > 0 ))
00:25:57.215 15:02:39 -- host/digest.sh@73 -- # killprocess 1216884
00:25:57.215 15:02:39 -- common/autotest_common.sh@936 -- # '[' -z 1216884 ']'
00:25:57.215 15:02:39 -- common/autotest_common.sh@940 -- # kill -0 1216884
00:25:57.215 15:02:39 -- common/autotest_common.sh@941 -- # uname
00:25:57.215 15:02:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:57.215 15:02:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1216884
00:25:57.215 15:02:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:57.216 15:02:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:57.216 15:02:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1216884'
00:25:57.216 killing process with pid 1216884
00:25:57.216 15:02:39 -- common/autotest_common.sh@955 -- # kill 1216884
00:25:57.216 Received shutdown signal, test time was about 2.000000 seconds
00:25:57.216
00:25:57.216 Latency(us)
00:25:57.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:57.216 ===================================================================================================================
00:25:57.216 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:57.216 15:02:39 -- common/autotest_common.sh@960 -- # wait 1216884
00:25:57.476 15:02:39 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:57.476 15:02:39 -- host/digest.sh@54 -- # local rw bs qd
00:25:57.476 15:02:39 -- host/digest.sh@56 -- # rw=randwrite
00:25:57.476 15:02:39 -- host/digest.sh@56 -- # bs=131072
00:25:57.476 15:02:39 -- host/digest.sh@56 -- # qd=16
00:25:57.476 15:02:39 -- host/digest.sh@58 -- # bperfpid=1217569
00:25:57.476 15:02:39 -- host/digest.sh@60 -- # waitforlisten 1217569 /var/tmp/bperf.sock
00:25:57.476 15:02:39 -- common/autotest_common.sh@817 -- # '[' -z 1217569 ']'
00:25:57.476 15:02:39 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:57.476 15:02:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:57.476 15:02:39 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:57.476 15:02:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:57.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:57.476 15:02:39 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:57.476 15:02:39 -- common/autotest_common.sh@10 -- # set +x
00:25:57.476 [2024-04-26 15:02:39.948582] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:25:57.476 [2024-04-26 15:02:39.948634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217569 ]
00:25:57.476 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:57.476 Zero copy mechanism will not be used.
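The '(( 164 > 0 ))' check traced above is host/digest.sh asserting that the first bdevperf run recorded a non-zero number of COMMAND TRANSIENT TRANSPORT ERROR completions, read back over the bdevperf RPC socket. A minimal standalone sketch of that same query, assuming bdevperf is still listening on /var/tmp/bperf.sock as in this run:

    # read the transient transport error counter for nvme0n1 from the running bdevperf
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Here 164 is the count of write completions that carried the transient transport error status, matching the data digest errors logged above.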
00:25:57.476 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.476 [2024-04-26 15:02:40.022947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.476 [2024-04-26 15:02:40.075426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.417 15:02:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:58.417 15:02:40 -- common/autotest_common.sh@850 -- # return 0 00:25:58.417 15:02:40 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:58.417 15:02:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:58.417 15:02:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:58.417 15:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.417 15:02:40 -- common/autotest_common.sh@10 -- # set +x 00:25:58.417 15:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.417 15:02:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.417 15:02:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.678 nvme0n1 00:25:58.678 15:02:41 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:58.678 15:02:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.678 15:02:41 -- common/autotest_common.sh@10 -- # set +x 00:25:58.678 15:02:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.678 15:02:41 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:58.678 15:02:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:58.678 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:58.678 Zero copy mechanism will not be used. 00:25:58.678 Running I/O for 2 seconds... 
00:25:58.678 [2024-04-26 15:02:41.336017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.678 [2024-04-26 15:02:41.336366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-04-26 15:02:41.336393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.678 [2024-04-26 15:02:41.342771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.678 [2024-04-26 15:02:41.343009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-04-26 15:02:41.343029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.938 [2024-04-26 15:02:41.351409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.938 [2024-04-26 15:02:41.351756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.938 [2024-04-26 15:02:41.351774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.938 [2024-04-26 15:02:41.358942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.938 [2024-04-26 15:02:41.359298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.938 [2024-04-26 15:02:41.359316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.938 [2024-04-26 15:02:41.365773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.938 [2024-04-26 15:02:41.366097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.366114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.372579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.372918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.372936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.381243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.381568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.381585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.390189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.390620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.390638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.398135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.398466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.398483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.405485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.405814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.405831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.413407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.413733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.413750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.420641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.420974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.420992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.428620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.428973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.428991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.434584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.434669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.434684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.444034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.444374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.444391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.451867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.452212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.452229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.459810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.460077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.460094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.467603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.467936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.467953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.476830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.477167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.477187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.484696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.485027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.485044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.494250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.494473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.494489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.502048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.502408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.502425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.510799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.511137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.511154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.517947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.518271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.518287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.525518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.525847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.525863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.532394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.532607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.532623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.538937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.539259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.539276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.548232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.548575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 
[2024-04-26 15:02:41.548592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.555905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.556261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.556278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.563954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.564273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.564289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.572668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.573011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.573028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.581346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.581694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.581711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.588988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.589340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.589356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.939 [2024-04-26 15:02:41.598893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:58.939 [2024-04-26 15:02:41.599245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.939 [2024-04-26 15:02:41.599262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.608289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.608631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.608647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.614811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.615145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.615162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.622716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.623019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.623037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.632964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.633305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.633321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.640796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.641127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.641144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.649589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.649914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.649931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.657782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.658024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.658040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.669553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.669793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.669809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.679826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.680167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.680184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.689693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.690022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.690038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.697447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.697795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.697815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.704259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.704471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.704486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.711255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.711678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.711695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.716454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.716778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.716794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.722384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.722603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.722619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.730320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.730653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.730670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.739114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.739431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.739448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.747414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.747747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.747764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.756365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.756697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.756713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.765005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.765351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.765368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.771248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.771564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.771581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.778924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 
[2024-04-26 15:02:41.779265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.779281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.787865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.788239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.788255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.796082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.796424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.796440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.805611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.805823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.805844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.813083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.813394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.813410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.821010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.821339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.821356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.826473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.826782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.826802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.836559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.836917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.836935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.845074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.845423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.845440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.201 [2024-04-26 15:02:41.852863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.201 [2024-04-26 15:02:41.853213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.201 [2024-04-26 15:02:41.853229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.202 [2024-04-26 15:02:41.860374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.202 [2024-04-26 15:02:41.860716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.202 [2024-04-26 15:02:41.860733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.867599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.867821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.867844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.877473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.877788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.877805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.885971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.886295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.886311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.893980] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.894319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.894335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.901256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.901475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.901491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.907382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.907684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.907700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.912828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.913055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.913071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.921117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.921454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.921471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.927927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.928140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.928156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.938982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.939336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.939352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:59.463 [2024-04-26 15:02:41.951120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.951525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.951541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.964958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.965295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.965311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.976329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.463 [2024-04-26 15:02:41.976670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.463 [2024-04-26 15:02:41.976686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.463 [2024-04-26 15:02:41.984997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:41.985454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:41.985472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:41.993557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:41.993916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:41.993932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.002426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.002753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.002769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.009865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.010306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.010323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.019844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.020198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.020215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.027386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.027702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.027718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.034941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.035382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.035399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.043971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.044298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.044314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.051629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.051857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.051880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.060522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.060862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.060879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.069845] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.070193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.070210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.078320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.078662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.078678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.085107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.085513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.085529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.091113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.091439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.091455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.097854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.098207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.098224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.106693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.107030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.107046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.113453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.113779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.113795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.464 [2024-04-26 15:02:42.121882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.464 [2024-04-26 15:02:42.122206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.464 [2024-04-26 15:02:42.122223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.131578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.131928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.131945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.139931] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.140152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.140168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.147248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.147585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.147602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.152893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.153221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.153237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.158719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.159060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.159077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.164337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.164661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.164677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.171776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.172087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 
[2024-04-26 15:02:42.172104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.178626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.179000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.179017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.185085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.185295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.185311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.190889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.191207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.191223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.198623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.198971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.198988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.205207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.205522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.205539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.726 [2024-04-26 15:02:42.211329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.726 [2024-04-26 15:02:42.211551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-04-26 15:02:42.211567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.220107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.220431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.220448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.228778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.228883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.228899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.238706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.239154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.239172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.247161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.247501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.247520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.256496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.256709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.256725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.267783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.268124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.268140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.279736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.280062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.280079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.288221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.288570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.288586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.294689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.295050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.295067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.304355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.304698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.304715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.311160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.311471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.311488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.319033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.319495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.319513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.324274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.324613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.324630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.329426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.329777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.329793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.335122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.335570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.335587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.343045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.343355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.343372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.349925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.350307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.350324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.357696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.358026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.358043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.366201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.366547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.366563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.375680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.375769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.375784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.727 [2024-04-26 15:02:42.384169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.727 [2024-04-26 15:02:42.384516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-04-26 15:02:42.384536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.391707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 
[2024-04-26 15:02:42.392024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.392041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.400279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.400612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.400628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.409640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.410076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.410093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.415630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.415947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.415964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.422497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.422845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.422861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.428509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.428844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.428860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.435198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.435530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.435546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.439952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) 
with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.440162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.440178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.444917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.445250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.445266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.450917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.451237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.451254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.458182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.458498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.458514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.465043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.465383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.465399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.471083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.471397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.471413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.479602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.479955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.479973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.487858] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.488250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.488267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.494757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.495097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.495114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.500721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.500813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.500827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.506223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.506529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.506546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.513682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.514128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.514145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.522677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.523008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.523025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.529814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.530198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.530214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.537703] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.538045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.538061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.544304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.544639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.544656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.550770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.551093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.551109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.556381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.556722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.556739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.565184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.565258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.990 [2024-04-26 15:02:42.565275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.990 [2024-04-26 15:02:42.574082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.990 [2024-04-26 15:02:42.574354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.574371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.991 [2024-04-26 15:02:42.579313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.579631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.579647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
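(Editorial aside, not part of the captured output.) Every repeated entry above follows the same pattern: tcp.c reports a data digest mismatch on a received DATA PDU, and the host then prints the affected WRITE command together with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0, i.e. the digest failure is surfaced as a retryable transport error rather than silent corruption. The NVMe/TCP data digest (DDGST) is a CRC-32C over the PDU payload. Below is a minimal, illustrative bit-by-bit CRC-32C sketch in Python; it is not SPDK's implementation (SPDK computes the digest internally, e.g. in the data_crc32_calc_done callback shown in these messages), and the 512-byte block size used in the example is an assumption, since the log only reports len:32 blocks per WRITE.

# Illustrative CRC-32C (Castagnoli) in pure Python; real code uses lookup
# tables or the SSE4.2 crc32 instruction instead of this slow bit-by-bit loop.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x82F63B78 is the reflected Castagnoli polynomial
            crc = ((crc >> 1) ^ 0x82F63B78) if (crc & 1) else (crc >> 1)
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    # Standard CRC-32C check value for the ASCII string "123456789"
    assert crc32c(b"123456789") == 0xE3069283
    payload = bytes(32 * 512)  # hypothetical 32-block payload, assuming 512-byte blocks
    print(hex(crc32c(payload)))

A receiver recomputes this value over the payload it actually got and compares it with the DDGST field carried in the PDU; any mismatch produces exactly the "Data digest error" lines seen throughout this run, which the error-injection test expects.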
00:25:59.991 [2024-04-26 15:02:42.584973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.585316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.585333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.991 [2024-04-26 15:02:42.591817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.592092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.592109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.991 [2024-04-26 15:02:42.601856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.601977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.601992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.991 [2024-04-26 15:02:42.608815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.609205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.609222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.991 [2024-04-26 15:02:42.619104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.619449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.619465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.991 [2024-04-26 15:02:42.630831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.631176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.631193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.991 [2024-04-26 15:02:42.637687] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.638039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.638056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.991 [2024-04-26 15:02:42.642855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.643210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.643227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.991 [2024-04-26 15:02:42.648099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:25:59.991 [2024-04-26 15:02:42.648445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.991 [2024-04-26 15:02:42.648462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.655062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.655273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.655289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.661924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.662318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.662335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.669609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.669951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.669967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.676987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.677300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.677316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.683682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.683898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.683914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.690288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.690639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.690654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.696333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.696770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.696788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.702435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.702773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.702789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.707332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.707678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.707694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.714102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.714426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.714443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.719780] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.719997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.720014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.254 [2024-04-26 15:02:42.724518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.254 [2024-04-26 15:02:42.724843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.254 [2024-04-26 15:02:42.724859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.729512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.729817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.729834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.734506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.734844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.734861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.740684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.740895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.740914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.747909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.748242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.748258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.755634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.755883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.755899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.765241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.765572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.765588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.773562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.773773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 
[2024-04-26 15:02:42.773789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.780721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.781057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.781074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.789362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.789684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.789700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.795991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.796212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.796228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.805447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.805778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.805795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.814322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.814649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.814665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.821438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.821776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.821793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.828686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.829019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.829036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.836388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.836722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.836739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.841639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.841968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.841985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.847121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.847454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.847471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.851929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.852138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.852153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.856724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.857104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.857121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.863502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.863822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.863843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.873625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.873933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.873950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.881256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.881584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.881601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.889070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.889420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.889437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.895508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.895820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.895843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.900817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.901035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.901051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.905616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.905922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.905937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.255 [2024-04-26 15:02:42.912457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.255 [2024-04-26 15:02:42.912782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.255 [2024-04-26 15:02:42.912798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.256 [2024-04-26 15:02:42.917456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.256 [2024-04-26 15:02:42.917781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.256 [2024-04-26 15:02:42.917798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.922459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.922783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.922806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.927633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.927846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.927862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.936402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.936738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.936755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.944068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.944387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.944404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.952300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.952632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.952649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.957750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.958098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.958116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.963356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 
[2024-04-26 15:02:42.963779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.963796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.969812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.970152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.970169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.978031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.978369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.978385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.985822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.986146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.986163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:42.994166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:42.994488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:42.994505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.002782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.003015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.003032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.011119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.011454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.011470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.017677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) 
with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.017896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.017911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.026275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.026621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.026638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.031932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.032267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.032283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.042862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.043188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.043204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.051700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.051928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.051947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.062411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.062765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.062782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.071657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.071720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.071734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.082202] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.082547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.082564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.091093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.091424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.091441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.099608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.099945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.518 [2024-04-26 15:02:43.099962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.518 [2024-04-26 15:02:43.106546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.518 [2024-04-26 15:02:43.106757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.106773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.519 [2024-04-26 15:02:43.113166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.519 [2024-04-26 15:02:43.113504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.113520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.519 [2024-04-26 15:02:43.123758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.519 [2024-04-26 15:02:43.124019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.124036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.519 [2024-04-26 15:02:43.133514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.519 [2024-04-26 15:02:43.133941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.133958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.519 [2024-04-26 
15:02:43.143072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.519 [2024-04-26 15:02:43.143421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.143438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.519 [2024-04-26 15:02:43.150613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.519 [2024-04-26 15:02:43.150947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.150965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.519 [2024-04-26 15:02:43.159359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.519 [2024-04-26 15:02:43.159683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.159700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.519 [2024-04-26 15:02:43.164453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.519 [2024-04-26 15:02:43.164774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.164790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.519 [2024-04-26 15:02:43.170414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.519 [2024-04-26 15:02:43.170735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.170752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.519 [2024-04-26 15:02:43.177183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.519 [2024-04-26 15:02:43.177515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.519 [2024-04-26 15:02:43.177532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.183137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.183461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.183478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.192765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.193108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.193125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.199992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.200326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.200342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.207878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.208288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.208304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.214967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.215307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.215323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.220462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.220897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.220914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.230553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.230893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.230910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.236644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.236861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.236877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.241758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.242097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.242113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.249613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.249946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.249963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.258699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.259029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.259049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.267043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.267255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.267271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.274348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.274561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.274576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.281570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.281903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.281920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.781 [2024-04-26 15:02:43.287072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90 00:26:00.781 [2024-04-26 15:02:43.287399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.781 [2024-04-26 15:02:43.287415] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:00.781 [2024-04-26 15:02:43.296373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90
00:26:00.781 [2024-04-26 15:02:43.296811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.781 [2024-04-26 15:02:43.296827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:00.782 [2024-04-26 15:02:43.302708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90
00:26:00.782 [2024-04-26 15:02:43.303028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.782 [2024-04-26 15:02:43.303044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:00.782 [2024-04-26 15:02:43.310807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90
00:26:00.782 [2024-04-26 15:02:43.311153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.782 [2024-04-26 15:02:43.311170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:00.782 [2024-04-26 15:02:43.318830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90
00:26:00.782 [2024-04-26 15:02:43.319060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.782 [2024-04-26 15:02:43.319076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:00.782 [2024-04-26 15:02:43.327194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x235c720) with pdu=0x2000190fef90
00:26:00.782 [2024-04-26 15:02:43.327584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.782 [2024-04-26 15:02:43.327600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:00.782
00:26:00.782 Latency(us)
00:26:00.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:00.782 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:00.782 nvme0n1 : 2.01 4043.56 505.45 0.00 0.00 3948.97 2075.31 12888.75
00:26:00.782 ===================================================================================================================
00:26:00.782 Total : 4043.56 505.45 0.00 0.00 3948.97 2075.31 12888.75
00:26:00.782 0
00:26:00.782 15:02:43 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:00.782 15:02:43 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:00.782 15:02:43 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:00.782 | .driver_specific
00:26:00.782 | .nvme_error
00:26:00.782 | .status_code
00:26:00.782 | .command_transient_transport_error'
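(Editorial aside, not part of the captured output.) The get_transient_errcount helper traced just above asks the bdevperf RPC server for I/O statistics and pulls the transient-transport-error counter out of the returned JSON; the (( 261 > 0 )) check that follows passes because this run recorded 261 such errors. A rough Python equivalent of that shell-plus-jq pipeline, assuming the same /var/tmp/bperf.sock socket, the nvme0n1 bdev name used here, and that SPDK's rpc.py is on PATH:

# Illustrative re-implementation of get_transient_errcount from host/digest.sh,
# reading the same RPC output and JSON path that the traced jq filter uses.
import json
import subprocess

def get_transient_errcount(bdev: str, sock: str = "/var/tmp/bperf.sock") -> int:
    # Equivalent to: rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
    out = subprocess.run(
        ["rpc.py", "-s", sock, "bdev_get_iostat", "-b", bdev],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)
    # Same path as the jq filter:
    # .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

if __name__ == "__main__":
    count = get_transient_errcount("nvme0n1")
    print(count)      # this particular run reported 261
    assert count > 0  # mirrors the (( 261 > 0 )) check in digest.sh

The actual rpc.py invocation and the resulting count check appear in the trace that continues below.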
15:02:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:01.043 15:02:43 -- host/digest.sh@71 -- # (( 261 > 0 )) 00:26:01.043 15:02:43 -- host/digest.sh@73 -- # killprocess 1217569 00:26:01.043 15:02:43 -- common/autotest_common.sh@936 -- # '[' -z 1217569 ']' 00:26:01.043 15:02:43 -- common/autotest_common.sh@940 -- # kill -0 1217569 00:26:01.043 15:02:43 -- common/autotest_common.sh@941 -- # uname 00:26:01.043 15:02:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:01.043 15:02:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1217569 00:26:01.043 15:02:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:01.043 15:02:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:01.043 15:02:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1217569' 00:26:01.043 killing process with pid 1217569 00:26:01.043 15:02:43 -- common/autotest_common.sh@955 -- # kill 1217569 00:26:01.043 Received shutdown signal, test time was about 2.000000 seconds 00:26:01.043 00:26:01.043 Latency(us) 00:26:01.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.043 =================================================================================================================== 00:26:01.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.043 15:02:43 -- common/autotest_common.sh@960 -- # wait 1217569 00:26:01.043 15:02:43 -- host/digest.sh@116 -- # killprocess 1215172 00:26:01.043 15:02:43 -- common/autotest_common.sh@936 -- # '[' -z 1215172 ']' 00:26:01.043 15:02:43 -- common/autotest_common.sh@940 -- # kill -0 1215172 00:26:01.043 15:02:43 -- common/autotest_common.sh@941 -- # uname 00:26:01.043 15:02:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:01.043 15:02:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1215172 00:26:01.303 15:02:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:01.303 15:02:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:01.303 15:02:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1215172' 00:26:01.303 killing process with pid 1215172 00:26:01.303 15:02:43 -- common/autotest_common.sh@955 -- # kill 1215172 00:26:01.303 15:02:43 -- common/autotest_common.sh@960 -- # wait 1215172 00:26:01.303 00:26:01.303 real 0m16.207s 00:26:01.303 user 0m31.876s 00:26:01.303 sys 0m3.377s 00:26:01.303 15:02:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:01.303 15:02:43 -- common/autotest_common.sh@10 -- # set +x 00:26:01.303 ************************************ 00:26:01.303 END TEST nvmf_digest_error 00:26:01.303 ************************************ 00:26:01.303 15:02:43 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:01.303 15:02:43 -- host/digest.sh@150 -- # nvmftestfini 00:26:01.303 15:02:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:01.303 15:02:43 -- nvmf/common.sh@117 -- # sync 00:26:01.303 15:02:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:01.303 15:02:43 -- nvmf/common.sh@120 -- # set +e 00:26:01.303 15:02:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:01.303 15:02:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:01.303 rmmod nvme_tcp 00:26:01.303 rmmod nvme_fabrics 00:26:01.303 rmmod nvme_keyring 00:26:01.565 15:02:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:01.565 15:02:43 -- nvmf/common.sh@124 -- # 
set -e 00:26:01.565 15:02:43 -- nvmf/common.sh@125 -- # return 0 00:26:01.565 15:02:43 -- nvmf/common.sh@478 -- # '[' -n 1215172 ']' 00:26:01.565 15:02:43 -- nvmf/common.sh@479 -- # killprocess 1215172 00:26:01.565 15:02:43 -- common/autotest_common.sh@936 -- # '[' -z 1215172 ']' 00:26:01.565 15:02:43 -- common/autotest_common.sh@940 -- # kill -0 1215172 00:26:01.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1215172) - No such process 00:26:01.565 15:02:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1215172 is not found' 00:26:01.565 Process with pid 1215172 is not found 00:26:01.565 15:02:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:01.565 15:02:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:01.565 15:02:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:01.565 15:02:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:01.565 15:02:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:01.565 15:02:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.565 15:02:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:01.565 15:02:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.480 15:02:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:03.480 00:26:03.480 real 0m42.484s 00:26:03.480 user 1m6.321s 00:26:03.480 sys 0m12.291s 00:26:03.480 15:02:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:03.480 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:26:03.480 ************************************ 00:26:03.480 END TEST nvmf_digest 00:26:03.480 ************************************ 00:26:03.480 15:02:46 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:26:03.480 15:02:46 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:26:03.480 15:02:46 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:26:03.480 15:02:46 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:03.480 15:02:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:03.480 15:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:03.480 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:26:03.741 ************************************ 00:26:03.741 START TEST nvmf_bdevperf 00:26:03.741 ************************************ 00:26:03.741 15:02:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:03.741 * Looking for test storage... 
00:26:03.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:03.741 15:02:46 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.741 15:02:46 -- nvmf/common.sh@7 -- # uname -s 00:26:03.741 15:02:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.741 15:02:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.741 15:02:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.741 15:02:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.741 15:02:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.741 15:02:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.741 15:02:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.741 15:02:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.741 15:02:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.741 15:02:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.741 15:02:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:03.741 15:02:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:03.741 15:02:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.741 15:02:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.741 15:02:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.741 15:02:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.741 15:02:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.741 15:02:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.741 15:02:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.741 15:02:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.742 15:02:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.742 15:02:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.742 15:02:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.742 15:02:46 -- paths/export.sh@5 -- # export PATH 00:26:03.742 15:02:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.742 15:02:46 -- nvmf/common.sh@47 -- # : 0 00:26:03.742 15:02:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:03.742 15:02:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:03.742 15:02:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.742 15:02:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.742 15:02:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.742 15:02:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:03.742 15:02:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:03.742 15:02:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:04.003 15:02:46 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:04.003 15:02:46 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:04.003 15:02:46 -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:04.003 15:02:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:04.003 15:02:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.003 15:02:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:04.003 15:02:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:04.003 15:02:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:04.003 15:02:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.003 15:02:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.003 15:02:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.003 15:02:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:04.003 15:02:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:04.003 15:02:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:04.003 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:26:10.634 15:02:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:10.634 15:02:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:10.634 15:02:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:10.634 15:02:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:10.634 15:02:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:10.634 15:02:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:10.634 15:02:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:10.634 15:02:53 -- nvmf/common.sh@295 -- # net_devs=() 00:26:10.634 15:02:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:10.634 15:02:53 -- nvmf/common.sh@296 
-- # e810=() 00:26:10.634 15:02:53 -- nvmf/common.sh@296 -- # local -ga e810 00:26:10.634 15:02:53 -- nvmf/common.sh@297 -- # x722=() 00:26:10.634 15:02:53 -- nvmf/common.sh@297 -- # local -ga x722 00:26:10.634 15:02:53 -- nvmf/common.sh@298 -- # mlx=() 00:26:10.634 15:02:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:10.634 15:02:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.634 15:02:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:10.634 15:02:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:10.634 15:02:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:10.634 15:02:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.634 15:02:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:10.634 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:10.634 15:02:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.634 15:02:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:10.634 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:10.634 15:02:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:10.634 15:02:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.634 15:02:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.634 15:02:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:10.634 15:02:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.634 15:02:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:10.634 Found 
net devices under 0000:31:00.0: cvl_0_0 00:26:10.634 15:02:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.634 15:02:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.634 15:02:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.634 15:02:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:10.634 15:02:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.634 15:02:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:10.634 Found net devices under 0000:31:00.1: cvl_0_1 00:26:10.634 15:02:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.634 15:02:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:10.634 15:02:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:10.634 15:02:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:10.634 15:02:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:10.634 15:02:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.634 15:02:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.634 15:02:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.634 15:02:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:10.634 15:02:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.634 15:02:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.634 15:02:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:10.634 15:02:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.634 15:02:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.634 15:02:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:10.634 15:02:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:10.634 15:02:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.634 15:02:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:10.898 15:02:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.898 15:02:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.898 15:02:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:10.898 15:02:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.898 15:02:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.898 15:02:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.898 15:02:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:10.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:26:10.898 00:26:10.898 --- 10.0.0.2 ping statistics --- 00:26:10.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.898 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:26:10.898 15:02:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:10.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:26:10.898 00:26:10.898 --- 10.0.0.1 ping statistics --- 00:26:10.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.898 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:26:11.159 15:02:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.159 15:02:53 -- nvmf/common.sh@411 -- # return 0 00:26:11.159 15:02:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:11.159 15:02:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.159 15:02:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:11.159 15:02:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:11.159 15:02:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.159 15:02:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:11.159 15:02:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:11.159 15:02:53 -- host/bdevperf.sh@25 -- # tgt_init 00:26:11.159 15:02:53 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:11.159 15:02:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:11.159 15:02:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:11.159 15:02:53 -- common/autotest_common.sh@10 -- # set +x 00:26:11.159 15:02:53 -- nvmf/common.sh@470 -- # nvmfpid=1222657 00:26:11.159 15:02:53 -- nvmf/common.sh@471 -- # waitforlisten 1222657 00:26:11.159 15:02:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:11.159 15:02:53 -- common/autotest_common.sh@817 -- # '[' -z 1222657 ']' 00:26:11.159 15:02:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.159 15:02:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:11.159 15:02:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.159 15:02:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:11.159 15:02:53 -- common/autotest_common.sh@10 -- # set +x 00:26:11.160 [2024-04-26 15:02:53.666392] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:11.160 [2024-04-26 15:02:53.666459] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.160 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.160 [2024-04-26 15:02:53.753398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:11.420 [2024-04-26 15:02:53.846373] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.420 [2024-04-26 15:02:53.846436] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.420 [2024-04-26 15:02:53.846445] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.420 [2024-04-26 15:02:53.846452] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.420 [2024-04-26 15:02:53.846459] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:11.420 [2024-04-26 15:02:53.846594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.420 [2024-04-26 15:02:53.846759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.420 [2024-04-26 15:02:53.846760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.022 15:02:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:12.022 15:02:54 -- common/autotest_common.sh@850 -- # return 0 00:26:12.022 15:02:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:12.022 15:02:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:12.023 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 15:02:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.023 15:02:54 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.023 15:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.023 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 [2024-04-26 15:02:54.484085] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.023 15:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.023 15:02:54 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:12.023 15:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.023 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 Malloc0 00:26:12.023 15:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.023 15:02:54 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:12.023 15:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.023 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 15:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.023 15:02:54 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:12.023 15:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.023 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 15:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.023 15:02:54 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.023 15:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.023 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 [2024-04-26 15:02:54.552483] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.023 15:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.023 15:02:54 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:12.023 15:02:54 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:12.023 15:02:54 -- nvmf/common.sh@521 -- # config=() 00:26:12.023 15:02:54 -- nvmf/common.sh@521 -- # local subsystem config 00:26:12.023 15:02:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:12.023 15:02:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:12.023 { 00:26:12.023 "params": { 00:26:12.023 "name": "Nvme$subsystem", 00:26:12.023 "trtype": "$TEST_TRANSPORT", 00:26:12.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.023 "adrfam": "ipv4", 00:26:12.023 "trsvcid": "$NVMF_PORT", 00:26:12.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.023 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.023 "hdgst": ${hdgst:-false}, 00:26:12.023 "ddgst": ${ddgst:-false} 00:26:12.023 }, 00:26:12.023 "method": "bdev_nvme_attach_controller" 00:26:12.023 } 00:26:12.023 EOF 00:26:12.023 )") 00:26:12.023 15:02:54 -- nvmf/common.sh@543 -- # cat 00:26:12.023 15:02:54 -- nvmf/common.sh@545 -- # jq . 00:26:12.023 15:02:54 -- nvmf/common.sh@546 -- # IFS=, 00:26:12.023 15:02:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:12.023 "params": { 00:26:12.023 "name": "Nvme1", 00:26:12.023 "trtype": "tcp", 00:26:12.023 "traddr": "10.0.0.2", 00:26:12.023 "adrfam": "ipv4", 00:26:12.023 "trsvcid": "4420", 00:26:12.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:12.023 "hdgst": false, 00:26:12.023 "ddgst": false 00:26:12.023 }, 00:26:12.023 "method": "bdev_nvme_attach_controller" 00:26:12.023 }' 00:26:12.023 [2024-04-26 15:02:54.604451] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:12.023 [2024-04-26 15:02:54.604504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222744 ] 00:26:12.023 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.023 [2024-04-26 15:02:54.664066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.283 [2024-04-26 15:02:54.726794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.542 Running I/O for 1 seconds... 00:26:13.484 00:26:13.484 Latency(us) 00:26:13.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.484 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:13.484 Verification LBA range: start 0x0 length 0x4000 00:26:13.484 Nvme1n1 : 1.00 8740.57 34.14 0.00 0.00 14582.91 3017.39 14745.60 00:26:13.484 =================================================================================================================== 00:26:13.484 Total : 8740.57 34.14 0.00 0.00 14582.91 3017.39 14745.60 00:26:13.745 15:02:56 -- host/bdevperf.sh@30 -- # bdevperfpid=1223042 00:26:13.745 15:02:56 -- host/bdevperf.sh@32 -- # sleep 3 00:26:13.745 15:02:56 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:13.745 15:02:56 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:13.745 15:02:56 -- nvmf/common.sh@521 -- # config=() 00:26:13.745 15:02:56 -- nvmf/common.sh@521 -- # local subsystem config 00:26:13.745 15:02:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:13.745 15:02:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:13.745 { 00:26:13.745 "params": { 00:26:13.745 "name": "Nvme$subsystem", 00:26:13.745 "trtype": "$TEST_TRANSPORT", 00:26:13.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.745 "adrfam": "ipv4", 00:26:13.745 "trsvcid": "$NVMF_PORT", 00:26:13.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.745 "hdgst": ${hdgst:-false}, 00:26:13.745 "ddgst": ${ddgst:-false} 00:26:13.745 }, 00:26:13.745 "method": "bdev_nvme_attach_controller" 00:26:13.745 } 00:26:13.745 EOF 00:26:13.745 )") 00:26:13.745 15:02:56 -- nvmf/common.sh@543 -- # cat 00:26:13.745 15:02:56 -- nvmf/common.sh@545 -- # jq . 
00:26:13.745 15:02:56 -- nvmf/common.sh@546 -- # IFS=, 00:26:13.745 15:02:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:13.745 "params": { 00:26:13.745 "name": "Nvme1", 00:26:13.745 "trtype": "tcp", 00:26:13.745 "traddr": "10.0.0.2", 00:26:13.745 "adrfam": "ipv4", 00:26:13.745 "trsvcid": "4420", 00:26:13.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:13.745 "hdgst": false, 00:26:13.745 "ddgst": false 00:26:13.745 }, 00:26:13.745 "method": "bdev_nvme_attach_controller" 00:26:13.745 }' 00:26:13.745 [2024-04-26 15:02:56.223295] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:13.745 [2024-04-26 15:02:56.223353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223042 ] 00:26:13.745 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.745 [2024-04-26 15:02:56.282558] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.745 [2024-04-26 15:02:56.345291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.006 Running I/O for 15 seconds... 00:26:16.556 15:02:59 -- host/bdevperf.sh@33 -- # kill -9 1222657 00:26:16.556 15:02:59 -- host/bdevperf.sh@35 -- # sleep 3 00:26:16.556 [2024-04-26 15:02:59.188842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.188884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.188911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.188922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.188932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.188942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.188951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.188959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.188969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.188977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.188987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.188996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.189007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.189014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.189024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.189032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.189041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.189049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.189059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.189068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.189078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.189087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.189098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.189107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.189119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.189126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.556 [2024-04-26 15:02:59.189139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.556 [2024-04-26 15:02:59.189153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.557 [2024-04-26 15:02:59.189195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.557 [2024-04-26 15:02:59.189215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.557 [2024-04-26 15:02:59.189231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.557 [2024-04-26 15:02:59.189247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.557 [2024-04-26 15:02:59.189263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.557 [2024-04-26 15:02:59.189280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.557 [2024-04-26 15:02:59.189296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87832 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 
15:02:59.189541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.557 [2024-04-26 15:02:59.189793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.557 [2024-04-26 15:02:59.189800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.189991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.189998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:16.558 [2024-04-26 15:02:59.190039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190200] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190361] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.558 [2024-04-26 15:02:59.190436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.558 [2024-04-26 15:02:59.190445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88392 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:16.559 [2024-04-26 15:02:59.190699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.559 [2024-04-26 15:02:59.190747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190943] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.190985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.190992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.191001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.191008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.191017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.191025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.191034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.191041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.191050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.191057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.191066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.191073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.191082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.559 [2024-04-26 15:02:59.191089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.191099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15187c0 is same with the state(5) to be set 00:26:16.559 [2024-04-26 15:02:59.191108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:26:16.559 [2024-04-26 15:02:59.191115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:16.559 [2024-04-26 15:02:59.191121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88504 len:8 PRP1 0x0 PRP2 0x0 00:26:16.559 [2024-04-26 15:02:59.191129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.559 [2024-04-26 15:02:59.191166] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15187c0 was disconnected and freed. reset controller. 00:26:16.559 [2024-04-26 15:02:59.194651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.559 [2024-04-26 15:02:59.194696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.559 [2024-04-26 15:02:59.195498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.559 [2024-04-26 15:02:59.195827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.559 [2024-04-26 15:02:59.195844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.560 [2024-04-26 15:02:59.195853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.560 [2024-04-26 15:02:59.196070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.560 [2024-04-26 15:02:59.196285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.560 [2024-04-26 15:02:59.196293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.560 [2024-04-26 15:02:59.196301] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.560 [2024-04-26 15:02:59.199778] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.560 [2024-04-26 15:02:59.208803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.560 [2024-04-26 15:02:59.209377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.560 [2024-04-26 15:02:59.209710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.560 [2024-04-26 15:02:59.209721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.560 [2024-04-26 15:02:59.209729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.560 [2024-04-26 15:02:59.209951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.560 [2024-04-26 15:02:59.210168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.560 [2024-04-26 15:02:59.210176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.560 [2024-04-26 15:02:59.210183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.560 [2024-04-26 15:02:59.213656] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.824 [2024-04-26 15:02:59.222677] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.824 [2024-04-26 15:02:59.223261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.223576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.223585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.824 [2024-04-26 15:02:59.223597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.824 [2024-04-26 15:02:59.223812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.824 [2024-04-26 15:02:59.224032] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.824 [2024-04-26 15:02:59.224041] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.824 [2024-04-26 15:02:59.224048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.824 [2024-04-26 15:02:59.227533] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.824 [2024-04-26 15:02:59.236550] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.824 [2024-04-26 15:02:59.237193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.237505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.237518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.824 [2024-04-26 15:02:59.237528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.824 [2024-04-26 15:02:59.237762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.824 [2024-04-26 15:02:59.237985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.824 [2024-04-26 15:02:59.237994] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.824 [2024-04-26 15:02:59.238002] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.824 [2024-04-26 15:02:59.241476] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.824 [2024-04-26 15:02:59.250277] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.824 [2024-04-26 15:02:59.250916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.251325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.251338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.824 [2024-04-26 15:02:59.251348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.824 [2024-04-26 15:02:59.251582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.824 [2024-04-26 15:02:59.251799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.824 [2024-04-26 15:02:59.251807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.824 [2024-04-26 15:02:59.251815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.824 [2024-04-26 15:02:59.255296] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.824 [2024-04-26 15:02:59.264103] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.824 [2024-04-26 15:02:59.264763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.265100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.265114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.824 [2024-04-26 15:02:59.265124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.824 [2024-04-26 15:02:59.265361] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.824 [2024-04-26 15:02:59.265578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.824 [2024-04-26 15:02:59.265587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.824 [2024-04-26 15:02:59.265594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.824 [2024-04-26 15:02:59.269075] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.824 [2024-04-26 15:02:59.277890] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.824 [2024-04-26 15:02:59.278543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.278879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.278893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.824 [2024-04-26 15:02:59.278903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.824 [2024-04-26 15:02:59.279136] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.824 [2024-04-26 15:02:59.279354] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.824 [2024-04-26 15:02:59.279363] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.824 [2024-04-26 15:02:59.279370] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.824 [2024-04-26 15:02:59.282852] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.824 [2024-04-26 15:02:59.291652] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.824 [2024-04-26 15:02:59.292319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.292659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.292672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.824 [2024-04-26 15:02:59.292681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.824 [2024-04-26 15:02:59.292923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.824 [2024-04-26 15:02:59.293142] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.824 [2024-04-26 15:02:59.293151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.824 [2024-04-26 15:02:59.293159] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.824 [2024-04-26 15:02:59.296635] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.824 [2024-04-26 15:02:59.305434] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.824 [2024-04-26 15:02:59.305983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.306335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.306348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.824 [2024-04-26 15:02:59.306357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.824 [2024-04-26 15:02:59.306591] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.824 [2024-04-26 15:02:59.306813] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.824 [2024-04-26 15:02:59.306821] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.824 [2024-04-26 15:02:59.306828] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.824 [2024-04-26 15:02:59.310310] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.824 [2024-04-26 15:02:59.319314] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.824 [2024-04-26 15:02:59.319929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.320327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.824 [2024-04-26 15:02:59.320340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.824 [2024-04-26 15:02:59.320349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.824 [2024-04-26 15:02:59.320583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.824 [2024-04-26 15:02:59.320800] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.824 [2024-04-26 15:02:59.320808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.824 [2024-04-26 15:02:59.320816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.824 [2024-04-26 15:02:59.324303] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.824 [2024-04-26 15:02:59.333119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.333738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.334086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.334100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.334109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.825 [2024-04-26 15:02:59.334343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.825 [2024-04-26 15:02:59.334560] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.825 [2024-04-26 15:02:59.334569] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.825 [2024-04-26 15:02:59.334576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.825 [2024-04-26 15:02:59.338055] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.825 [2024-04-26 15:02:59.346857] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.347501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.347850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.347864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.347873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.825 [2024-04-26 15:02:59.348107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.825 [2024-04-26 15:02:59.348324] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.825 [2024-04-26 15:02:59.348337] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.825 [2024-04-26 15:02:59.348344] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.825 [2024-04-26 15:02:59.351818] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.825 [2024-04-26 15:02:59.360639] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.361298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.361569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.361582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.361592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.825 [2024-04-26 15:02:59.361826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.825 [2024-04-26 15:02:59.362052] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.825 [2024-04-26 15:02:59.362061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.825 [2024-04-26 15:02:59.362069] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.825 [2024-04-26 15:02:59.365540] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.825 [2024-04-26 15:02:59.374542] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.375202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.375537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.375550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.375560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.825 [2024-04-26 15:02:59.375794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.825 [2024-04-26 15:02:59.376019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.825 [2024-04-26 15:02:59.376029] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.825 [2024-04-26 15:02:59.376036] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.825 [2024-04-26 15:02:59.379507] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.825 [2024-04-26 15:02:59.388304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.388818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.389164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.389177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.389186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.825 [2024-04-26 15:02:59.389420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.825 [2024-04-26 15:02:59.389637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.825 [2024-04-26 15:02:59.389645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.825 [2024-04-26 15:02:59.389660] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.825 [2024-04-26 15:02:59.393142] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.825 [2024-04-26 15:02:59.402161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.402827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.403240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.403253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.403263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.825 [2024-04-26 15:02:59.403496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.825 [2024-04-26 15:02:59.403714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.825 [2024-04-26 15:02:59.403722] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.825 [2024-04-26 15:02:59.403729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.825 [2024-04-26 15:02:59.407209] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.825 [2024-04-26 15:02:59.416012] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.416666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.417007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.417021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.417031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.825 [2024-04-26 15:02:59.417264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.825 [2024-04-26 15:02:59.417481] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.825 [2024-04-26 15:02:59.417489] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.825 [2024-04-26 15:02:59.417497] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.825 [2024-04-26 15:02:59.420976] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.825 [2024-04-26 15:02:59.429789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.430444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.430786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.430799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.430809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.825 [2024-04-26 15:02:59.431051] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.825 [2024-04-26 15:02:59.431269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.825 [2024-04-26 15:02:59.431277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.825 [2024-04-26 15:02:59.431285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.825 [2024-04-26 15:02:59.434764] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.825 [2024-04-26 15:02:59.443558] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.444024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.444424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.444437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.444447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.825 [2024-04-26 15:02:59.444680] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.825 [2024-04-26 15:02:59.444906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.825 [2024-04-26 15:02:59.444916] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.825 [2024-04-26 15:02:59.444923] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.825 [2024-04-26 15:02:59.448397] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.825 [2024-04-26 15:02:59.457410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.825 [2024-04-26 15:02:59.458153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.458409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.825 [2024-04-26 15:02:59.458423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.825 [2024-04-26 15:02:59.458433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.826 [2024-04-26 15:02:59.458667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.826 [2024-04-26 15:02:59.458892] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.826 [2024-04-26 15:02:59.458901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.826 [2024-04-26 15:02:59.458908] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.826 [2024-04-26 15:02:59.462385] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:16.826 [2024-04-26 15:02:59.471188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.826 [2024-04-26 15:02:59.471751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.826 [2024-04-26 15:02:59.472038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.826 [2024-04-26 15:02:59.472052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.826 [2024-04-26 15:02:59.472062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.826 [2024-04-26 15:02:59.472295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.826 [2024-04-26 15:02:59.472513] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.826 [2024-04-26 15:02:59.472521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.826 [2024-04-26 15:02:59.472528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:16.826 [2024-04-26 15:02:59.476005] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:16.826 [2024-04-26 15:02:59.485013] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:16.826 [2024-04-26 15:02:59.485685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.826 [2024-04-26 15:02:59.485928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.826 [2024-04-26 15:02:59.485944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:16.826 [2024-04-26 15:02:59.485954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:16.826 [2024-04-26 15:02:59.486187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:16.826 [2024-04-26 15:02:59.486405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:16.826 [2024-04-26 15:02:59.486414] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:16.826 [2024-04-26 15:02:59.486421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.089 [2024-04-26 15:02:59.489899] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.089 [2024-04-26 15:02:59.498906] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.089 [2024-04-26 15:02:59.499466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.499844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.499859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.089 [2024-04-26 15:02:59.499868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.089 [2024-04-26 15:02:59.500102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.089 [2024-04-26 15:02:59.500319] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.089 [2024-04-26 15:02:59.500327] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.089 [2024-04-26 15:02:59.500335] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.089 [2024-04-26 15:02:59.503810] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.089 [2024-04-26 15:02:59.512614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.089 [2024-04-26 15:02:59.513274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.513675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.513688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.089 [2024-04-26 15:02:59.513698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.089 [2024-04-26 15:02:59.513939] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.089 [2024-04-26 15:02:59.514157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.089 [2024-04-26 15:02:59.514165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.089 [2024-04-26 15:02:59.514172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.089 [2024-04-26 15:02:59.517648] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.089 [2024-04-26 15:02:59.526450] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.089 [2024-04-26 15:02:59.527114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.527451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.527465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.089 [2024-04-26 15:02:59.527474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.089 [2024-04-26 15:02:59.527708] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.089 [2024-04-26 15:02:59.527945] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.089 [2024-04-26 15:02:59.527954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.089 [2024-04-26 15:02:59.527962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.089 [2024-04-26 15:02:59.531435] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.089 [2024-04-26 15:02:59.540242] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.089 [2024-04-26 15:02:59.540912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.541252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.541265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.089 [2024-04-26 15:02:59.541275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.089 [2024-04-26 15:02:59.541509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.089 [2024-04-26 15:02:59.541726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.089 [2024-04-26 15:02:59.541735] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.089 [2024-04-26 15:02:59.541742] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.089 [2024-04-26 15:02:59.545224] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.089 [2024-04-26 15:02:59.554035] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.089 [2024-04-26 15:02:59.554699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.555053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.555068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.089 [2024-04-26 15:02:59.555078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.089 [2024-04-26 15:02:59.555311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.089 [2024-04-26 15:02:59.555529] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.089 [2024-04-26 15:02:59.555538] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.089 [2024-04-26 15:02:59.555545] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.089 [2024-04-26 15:02:59.559030] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.089 [2024-04-26 15:02:59.567833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.089 [2024-04-26 15:02:59.568501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.568834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.568860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.089 [2024-04-26 15:02:59.568870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.089 [2024-04-26 15:02:59.569104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.089 [2024-04-26 15:02:59.569321] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.089 [2024-04-26 15:02:59.569329] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.089 [2024-04-26 15:02:59.569336] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.089 [2024-04-26 15:02:59.572808] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.089 [2024-04-26 15:02:59.581605] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.089 [2024-04-26 15:02:59.582261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.582531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.582543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.089 [2024-04-26 15:02:59.582552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.089 [2024-04-26 15:02:59.582787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.089 [2024-04-26 15:02:59.583013] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.089 [2024-04-26 15:02:59.583022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.089 [2024-04-26 15:02:59.583029] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.089 [2024-04-26 15:02:59.586505] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.089 [2024-04-26 15:02:59.595509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.089 [2024-04-26 15:02:59.596080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.596414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.089 [2024-04-26 15:02:59.596427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.089 [2024-04-26 15:02:59.596436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.089 [2024-04-26 15:02:59.596671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.089 [2024-04-26 15:02:59.596894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.089 [2024-04-26 15:02:59.596903] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.089 [2024-04-26 15:02:59.596911] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.600385] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.090 [2024-04-26 15:02:59.609409] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.609966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.610355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.610368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.090 [2024-04-26 15:02:59.610382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.090 [2024-04-26 15:02:59.610616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.090 [2024-04-26 15:02:59.610833] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.090 [2024-04-26 15:02:59.610852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.090 [2024-04-26 15:02:59.610859] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.614332] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.090 [2024-04-26 15:02:59.623137] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.623801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.624094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.624108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.090 [2024-04-26 15:02:59.624118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.090 [2024-04-26 15:02:59.624352] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.090 [2024-04-26 15:02:59.624570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.090 [2024-04-26 15:02:59.624578] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.090 [2024-04-26 15:02:59.624586] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.628075] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.090 [2024-04-26 15:02:59.636899] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.637448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.637749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.637759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.090 [2024-04-26 15:02:59.637767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.090 [2024-04-26 15:02:59.637989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.090 [2024-04-26 15:02:59.638204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.090 [2024-04-26 15:02:59.638213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.090 [2024-04-26 15:02:59.638220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.641693] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.090 [2024-04-26 15:02:59.650713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.651389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.651722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.651734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.090 [2024-04-26 15:02:59.651744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.090 [2024-04-26 15:02:59.651991] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.090 [2024-04-26 15:02:59.652210] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.090 [2024-04-26 15:02:59.652218] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.090 [2024-04-26 15:02:59.652225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.655703] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.090 [2024-04-26 15:02:59.664531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.665197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.665536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.665549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.090 [2024-04-26 15:02:59.665559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.090 [2024-04-26 15:02:59.665793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.090 [2024-04-26 15:02:59.666017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.090 [2024-04-26 15:02:59.666026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.090 [2024-04-26 15:02:59.666034] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.669506] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.090 [2024-04-26 15:02:59.678311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.678855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.679061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.679074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.090 [2024-04-26 15:02:59.679082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.090 [2024-04-26 15:02:59.679298] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.090 [2024-04-26 15:02:59.679513] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.090 [2024-04-26 15:02:59.679521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.090 [2024-04-26 15:02:59.679527] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.683002] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.090 [2024-04-26 15:02:59.692218] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.692870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.693209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.693221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.090 [2024-04-26 15:02:59.693231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.090 [2024-04-26 15:02:59.693464] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.090 [2024-04-26 15:02:59.693687] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.090 [2024-04-26 15:02:59.693695] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.090 [2024-04-26 15:02:59.693702] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.697183] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.090 [2024-04-26 15:02:59.705989] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.706609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.706878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.706893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.090 [2024-04-26 15:02:59.706902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.090 [2024-04-26 15:02:59.707137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.090 [2024-04-26 15:02:59.707355] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.090 [2024-04-26 15:02:59.707363] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.090 [2024-04-26 15:02:59.707370] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.710845] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.090 [2024-04-26 15:02:59.719851] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.720368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.720695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.090 [2024-04-26 15:02:59.720705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.090 [2024-04-26 15:02:59.720712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.090 [2024-04-26 15:02:59.720935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.090 [2024-04-26 15:02:59.721150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.090 [2024-04-26 15:02:59.721158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.090 [2024-04-26 15:02:59.721165] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.090 [2024-04-26 15:02:59.724652] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.090 [2024-04-26 15:02:59.733682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.090 [2024-04-26 15:02:59.734333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.091 [2024-04-26 15:02:59.734688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.091 [2024-04-26 15:02:59.734702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.091 [2024-04-26 15:02:59.734711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.091 [2024-04-26 15:02:59.734957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.091 [2024-04-26 15:02:59.735176] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.091 [2024-04-26 15:02:59.735189] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.091 [2024-04-26 15:02:59.735196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.091 [2024-04-26 15:02:59.738673] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.091 [2024-04-26 15:02:59.747493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.091 [2024-04-26 15:02:59.748187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.091 [2024-04-26 15:02:59.748594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.091 [2024-04-26 15:02:59.748606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.091 [2024-04-26 15:02:59.748616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.091 [2024-04-26 15:02:59.748856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.091 [2024-04-26 15:02:59.749075] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.091 [2024-04-26 15:02:59.749083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.091 [2024-04-26 15:02:59.749090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.091 [2024-04-26 15:02:59.752562] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.351 [2024-04-26 15:02:59.761375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.351 [2024-04-26 15:02:59.761959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:02:59.762266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:02:59.762276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.351 [2024-04-26 15:02:59.762284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.351 [2024-04-26 15:02:59.762499] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.351 [2024-04-26 15:02:59.762714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.351 [2024-04-26 15:02:59.762722] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.351 [2024-04-26 15:02:59.762729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.351 [2024-04-26 15:02:59.766205] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.351 [2024-04-26 15:02:59.775208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.351 [2024-04-26 15:02:59.775783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:02:59.776083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:02:59.776095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.351 [2024-04-26 15:02:59.776102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.351 [2024-04-26 15:02:59.776317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.351 [2024-04-26 15:02:59.776531] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.351 [2024-04-26 15:02:59.776538] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.351 [2024-04-26 15:02:59.776550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.351 [2024-04-26 15:02:59.780026] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.351 [2024-04-26 15:02:59.789045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.351 [2024-04-26 15:02:59.789573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:02:59.789934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:02:59.789945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.351 [2024-04-26 15:02:59.789952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.351 [2024-04-26 15:02:59.790167] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.351 [2024-04-26 15:02:59.790381] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.351 [2024-04-26 15:02:59.790389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.351 [2024-04-26 15:02:59.790396] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.351 [2024-04-26 15:02:59.793874] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.351 [2024-04-26 15:02:59.802893] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.351 [2024-04-26 15:02:59.803467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:02:59.803694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.351 [2024-04-26 15:02:59.803710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.351 [2024-04-26 15:02:59.803718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.351 [2024-04-26 15:02:59.803938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.351 [2024-04-26 15:02:59.804153] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.804160] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.804167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.807640] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.352 [2024-04-26 15:02:59.816652] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.352 [2024-04-26 15:02:59.817187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.817511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.817520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.352 [2024-04-26 15:02:59.817527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.352 [2024-04-26 15:02:59.817741] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.352 [2024-04-26 15:02:59.817961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.817969] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.817976] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.821456] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.352 [2024-04-26 15:02:59.830479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.352 [2024-04-26 15:02:59.830992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.831293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.831303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.352 [2024-04-26 15:02:59.831311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.352 [2024-04-26 15:02:59.831526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.352 [2024-04-26 15:02:59.831739] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.831747] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.831754] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.835233] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.352 [2024-04-26 15:02:59.844252] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.352 [2024-04-26 15:02:59.844786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.845078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.845089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.352 [2024-04-26 15:02:59.845096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.352 [2024-04-26 15:02:59.845311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.352 [2024-04-26 15:02:59.845524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.845533] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.845540] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.849101] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.352 [2024-04-26 15:02:59.858130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.352 [2024-04-26 15:02:59.858662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.858971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.858982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.352 [2024-04-26 15:02:59.858989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.352 [2024-04-26 15:02:59.859204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.352 [2024-04-26 15:02:59.859418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.859425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.859432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.862913] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.352 [2024-04-26 15:02:59.871933] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.352 [2024-04-26 15:02:59.872464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.872821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.872830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.352 [2024-04-26 15:02:59.872843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.352 [2024-04-26 15:02:59.873058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.352 [2024-04-26 15:02:59.873272] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.873280] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.873287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.876759] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.352 [2024-04-26 15:02:59.886006] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.352 [2024-04-26 15:02:59.886540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.886897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.886907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.352 [2024-04-26 15:02:59.886915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.352 [2024-04-26 15:02:59.887131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.352 [2024-04-26 15:02:59.887345] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.887353] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.887360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.890844] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.352 [2024-04-26 15:02:59.899863] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.352 [2024-04-26 15:02:59.900392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.900747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.900757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.352 [2024-04-26 15:02:59.900765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.352 [2024-04-26 15:02:59.900987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.352 [2024-04-26 15:02:59.901201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.901209] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.901216] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.904692] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.352 [2024-04-26 15:02:59.913708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.352 [2024-04-26 15:02:59.914337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.914710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.914723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.352 [2024-04-26 15:02:59.914733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.352 [2024-04-26 15:02:59.914974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.352 [2024-04-26 15:02:59.915192] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.915200] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.915207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.918681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.352 [2024-04-26 15:02:59.927482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.352 [2024-04-26 15:02:59.927971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.928360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-26 15:02:59.928372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.352 [2024-04-26 15:02:59.928382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.352 [2024-04-26 15:02:59.928615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.352 [2024-04-26 15:02:59.928833] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.352 [2024-04-26 15:02:59.928850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.352 [2024-04-26 15:02:59.928857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.352 [2024-04-26 15:02:59.932343] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.353 [2024-04-26 15:02:59.941352] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.353 [2024-04-26 15:02:59.941793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.942095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.942107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.353 [2024-04-26 15:02:59.942115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.353 [2024-04-26 15:02:59.942331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.353 [2024-04-26 15:02:59.942546] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.353 [2024-04-26 15:02:59.942553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.353 [2024-04-26 15:02:59.942560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.353 [2024-04-26 15:02:59.946040] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.353 [2024-04-26 15:02:59.955265] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.353 [2024-04-26 15:02:59.955719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.956051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.956066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.353 [2024-04-26 15:02:59.956073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.353 [2024-04-26 15:02:59.956289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.353 [2024-04-26 15:02:59.956507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.353 [2024-04-26 15:02:59.956517] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.353 [2024-04-26 15:02:59.956523] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.353 [2024-04-26 15:02:59.960004] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.353 [2024-04-26 15:02:59.969022] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.353 [2024-04-26 15:02:59.969664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.970019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.970033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.353 [2024-04-26 15:02:59.970042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.353 [2024-04-26 15:02:59.970276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.353 [2024-04-26 15:02:59.970493] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.353 [2024-04-26 15:02:59.970501] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.353 [2024-04-26 15:02:59.970508] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.353 [2024-04-26 15:02:59.973994] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.353 [2024-04-26 15:02:59.982813] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.353 [2024-04-26 15:02:59.983398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.983739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.983749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.353 [2024-04-26 15:02:59.983757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.353 [2024-04-26 15:02:59.983977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.353 [2024-04-26 15:02:59.984192] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.353 [2024-04-26 15:02:59.984199] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.353 [2024-04-26 15:02:59.984206] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.353 [2024-04-26 15:02:59.987681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.353 [2024-04-26 15:02:59.996694] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.353 [2024-04-26 15:02:59.997276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.997585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:02:59.997594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.353 [2024-04-26 15:02:59.997606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.353 [2024-04-26 15:02:59.997821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.353 [2024-04-26 15:02:59.998041] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.353 [2024-04-26 15:02:59.998050] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.353 [2024-04-26 15:02:59.998056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.353 [2024-04-26 15:03:00.001527] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.353 [2024-04-26 15:03:00.010422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.353 [2024-04-26 15:03:00.010963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:03:00.011306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-26 15:03:00.011319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.353 [2024-04-26 15:03:00.011329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.353 [2024-04-26 15:03:00.011563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.353 [2024-04-26 15:03:00.011780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.353 [2024-04-26 15:03:00.011789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.353 [2024-04-26 15:03:00.011796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.353 [2024-04-26 15:03:00.015277] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.616 [2024-04-26 15:03:00.024284] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.616 [2024-04-26 15:03:00.024863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 15:03:00.025258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 15:03:00.025268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.616 [2024-04-26 15:03:00.025276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.616 [2024-04-26 15:03:00.025492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.616 [2024-04-26 15:03:00.025706] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.616 [2024-04-26 15:03:00.025714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.616 [2024-04-26 15:03:00.025721] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.616 [2024-04-26 15:03:00.029209] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.616 [2024-04-26 15:03:00.038024] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.616 [2024-04-26 15:03:00.038476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 15:03:00.038794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 15:03:00.038805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.616 [2024-04-26 15:03:00.038813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.616 [2024-04-26 15:03:00.039043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.616 [2024-04-26 15:03:00.039258] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.616 [2024-04-26 15:03:00.039266] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.616 [2024-04-26 15:03:00.039273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.616 [2024-04-26 15:03:00.042747] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.616 [2024-04-26 15:03:00.051769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.616 [2024-04-26 15:03:00.052379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 15:03:00.052717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 15:03:00.052730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.616 [2024-04-26 15:03:00.052740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.616 [2024-04-26 15:03:00.052982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.616 [2024-04-26 15:03:00.053200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.616 [2024-04-26 15:03:00.053208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.616 [2024-04-26 15:03:00.053216] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.616 [2024-04-26 15:03:00.056697] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.616 [2024-04-26 15:03:00.065524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.616 [2024-04-26 15:03:00.066078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 15:03:00.066288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.616 [2024-04-26 15:03:00.066302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.616 [2024-04-26 15:03:00.066310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.616 [2024-04-26 15:03:00.066526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.616 [2024-04-26 15:03:00.066742] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.616 [2024-04-26 15:03:00.066749] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.616 [2024-04-26 15:03:00.066756] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.616 [2024-04-26 15:03:00.070241] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.617 [2024-04-26 15:03:00.079257] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.079814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.080200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.080211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.617 [2024-04-26 15:03:00.080219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.617 [2024-04-26 15:03:00.080434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.617 [2024-04-26 15:03:00.080653] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.617 [2024-04-26 15:03:00.080660] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.617 [2024-04-26 15:03:00.080667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.617 [2024-04-26 15:03:00.084149] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.617 [2024-04-26 15:03:00.093167] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.093854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.094154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.094168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.617 [2024-04-26 15:03:00.094178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.617 [2024-04-26 15:03:00.094413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.617 [2024-04-26 15:03:00.094630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.617 [2024-04-26 15:03:00.094639] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.617 [2024-04-26 15:03:00.094646] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.617 [2024-04-26 15:03:00.098128] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.617 [2024-04-26 15:03:00.106940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.107506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.107615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.107627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.617 [2024-04-26 15:03:00.107635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.617 [2024-04-26 15:03:00.107859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.617 [2024-04-26 15:03:00.108074] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.617 [2024-04-26 15:03:00.108082] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.617 [2024-04-26 15:03:00.108090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.617 [2024-04-26 15:03:00.111564] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.617 [2024-04-26 15:03:00.120789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.121211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.121526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.121536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.617 [2024-04-26 15:03:00.121544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.617 [2024-04-26 15:03:00.121758] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.617 [2024-04-26 15:03:00.121977] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.617 [2024-04-26 15:03:00.121991] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.617 [2024-04-26 15:03:00.121998] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.617 [2024-04-26 15:03:00.125477] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.617 [2024-04-26 15:03:00.134519] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.135190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.135526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.135538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.617 [2024-04-26 15:03:00.135548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.617 [2024-04-26 15:03:00.135782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.617 [2024-04-26 15:03:00.136007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.617 [2024-04-26 15:03:00.136017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.617 [2024-04-26 15:03:00.136025] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.617 [2024-04-26 15:03:00.139505] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.617 [2024-04-26 15:03:00.148322] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.148859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.149156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.149166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.617 [2024-04-26 15:03:00.149174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.617 [2024-04-26 15:03:00.149390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.617 [2024-04-26 15:03:00.149605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.617 [2024-04-26 15:03:00.149613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.617 [2024-04-26 15:03:00.149620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.617 [2024-04-26 15:03:00.153107] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.617 [2024-04-26 15:03:00.162139] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.162676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.162980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.162991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.617 [2024-04-26 15:03:00.162999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.617 [2024-04-26 15:03:00.163214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.617 [2024-04-26 15:03:00.163428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.617 [2024-04-26 15:03:00.163436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.617 [2024-04-26 15:03:00.163447] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.617 [2024-04-26 15:03:00.166929] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.617 [2024-04-26 15:03:00.175954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.176483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.176782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.176791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.617 [2024-04-26 15:03:00.176799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.617 [2024-04-26 15:03:00.177019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.617 [2024-04-26 15:03:00.177234] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.617 [2024-04-26 15:03:00.177249] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.617 [2024-04-26 15:03:00.177256] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.617 [2024-04-26 15:03:00.180727] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.617 [2024-04-26 15:03:00.189745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.190302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.190589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.190603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.617 [2024-04-26 15:03:00.190613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.617 [2024-04-26 15:03:00.190862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.617 [2024-04-26 15:03:00.191083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.617 [2024-04-26 15:03:00.191092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.617 [2024-04-26 15:03:00.191100] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.617 [2024-04-26 15:03:00.194578] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.617 [2024-04-26 15:03:00.203601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.617 [2024-04-26 15:03:00.204312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.617 [2024-04-26 15:03:00.204703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.204717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.618 [2024-04-26 15:03:00.204727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.618 [2024-04-26 15:03:00.204969] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.618 [2024-04-26 15:03:00.205188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.618 [2024-04-26 15:03:00.205197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.618 [2024-04-26 15:03:00.205205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.618 [2024-04-26 15:03:00.208689] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.618 [2024-04-26 15:03:00.217508] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.618 [2024-04-26 15:03:00.218100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.218451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.218462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.618 [2024-04-26 15:03:00.218470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.618 [2024-04-26 15:03:00.218686] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.618 [2024-04-26 15:03:00.218906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.618 [2024-04-26 15:03:00.218915] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.618 [2024-04-26 15:03:00.218922] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.618 [2024-04-26 15:03:00.222402] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.618 [2024-04-26 15:03:00.231441] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.618 [2024-04-26 15:03:00.231856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.232181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.232192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.618 [2024-04-26 15:03:00.232200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.618 [2024-04-26 15:03:00.232416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.618 [2024-04-26 15:03:00.232631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.618 [2024-04-26 15:03:00.232639] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.618 [2024-04-26 15:03:00.232647] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.618 [2024-04-26 15:03:00.236131] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.618 [2024-04-26 15:03:00.245445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.618 [2024-04-26 15:03:00.245916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.246245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.246255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.618 [2024-04-26 15:03:00.246263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.618 [2024-04-26 15:03:00.246478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.618 [2024-04-26 15:03:00.246694] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.618 [2024-04-26 15:03:00.246701] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.618 [2024-04-26 15:03:00.246708] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.618 [2024-04-26 15:03:00.250191] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.618 [2024-04-26 15:03:00.259219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.618 [2024-04-26 15:03:00.259754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.260064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.260075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.618 [2024-04-26 15:03:00.260082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.618 [2024-04-26 15:03:00.260297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.618 [2024-04-26 15:03:00.260512] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.618 [2024-04-26 15:03:00.260519] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.618 [2024-04-26 15:03:00.260526] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.618 [2024-04-26 15:03:00.264009] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.618 [2024-04-26 15:03:00.273030] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.618 [2024-04-26 15:03:00.273560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.273892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.618 [2024-04-26 15:03:00.273902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.618 [2024-04-26 15:03:00.273910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.618 [2024-04-26 15:03:00.274124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.618 [2024-04-26 15:03:00.274339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.618 [2024-04-26 15:03:00.274347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.618 [2024-04-26 15:03:00.274353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.618 [2024-04-26 15:03:00.277827] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.881 [2024-04-26 15:03:00.286849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.881 [2024-04-26 15:03:00.287421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.881 [2024-04-26 15:03:00.287769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.881 [2024-04-26 15:03:00.287779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.881 [2024-04-26 15:03:00.287787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.881 [2024-04-26 15:03:00.288008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.881 [2024-04-26 15:03:00.288222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.881 [2024-04-26 15:03:00.288230] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.881 [2024-04-26 15:03:00.288237] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.881 [2024-04-26 15:03:00.291708] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.881 [2024-04-26 15:03:00.300729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.881 [2024-04-26 15:03:00.301320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.881 [2024-04-26 15:03:00.301632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.881 [2024-04-26 15:03:00.301642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.881 [2024-04-26 15:03:00.301649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.881 [2024-04-26 15:03:00.301869] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.881 [2024-04-26 15:03:00.302083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.881 [2024-04-26 15:03:00.302091] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.881 [2024-04-26 15:03:00.302098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.881 [2024-04-26 15:03:00.305570] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.881 [2024-04-26 15:03:00.314587] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.881 [2024-04-26 15:03:00.315135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.881 [2024-04-26 15:03:00.315441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.881 [2024-04-26 15:03:00.315451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.881 [2024-04-26 15:03:00.315458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.881 [2024-04-26 15:03:00.315673] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.881 [2024-04-26 15:03:00.315892] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.881 [2024-04-26 15:03:00.315902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.881 [2024-04-26 15:03:00.315909] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.881 [2024-04-26 15:03:00.319383] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.881 [2024-04-26 15:03:00.328403] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.881 [2024-04-26 15:03:00.328976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.881 [2024-04-26 15:03:00.329292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.881 [2024-04-26 15:03:00.329305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.881 [2024-04-26 15:03:00.329315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.881 [2024-04-26 15:03:00.329549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.881 [2024-04-26 15:03:00.329767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.881 [2024-04-26 15:03:00.329775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.881 [2024-04-26 15:03:00.329782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.333276] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.882 [2024-04-26 15:03:00.342287] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.342922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.343306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.343323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.882 [2024-04-26 15:03:00.343333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.882 [2024-04-26 15:03:00.343567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.882 [2024-04-26 15:03:00.343784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.882 [2024-04-26 15:03:00.343792] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.882 [2024-04-26 15:03:00.343800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.347282] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.882 [2024-04-26 15:03:00.356085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.356705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.357154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.357169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.882 [2024-04-26 15:03:00.357179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.882 [2024-04-26 15:03:00.357412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.882 [2024-04-26 15:03:00.357630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.882 [2024-04-26 15:03:00.357638] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.882 [2024-04-26 15:03:00.357645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.361129] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.882 [2024-04-26 15:03:00.369946] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.370431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.370740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.370750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.882 [2024-04-26 15:03:00.370758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.882 [2024-04-26 15:03:00.370980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.882 [2024-04-26 15:03:00.371195] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.882 [2024-04-26 15:03:00.371203] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.882 [2024-04-26 15:03:00.371211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.374686] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.882 [2024-04-26 15:03:00.383701] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.384344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.384682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.384695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.882 [2024-04-26 15:03:00.384709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.882 [2024-04-26 15:03:00.384949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.882 [2024-04-26 15:03:00.385169] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.882 [2024-04-26 15:03:00.385178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.882 [2024-04-26 15:03:00.385185] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.388656] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.882 [2024-04-26 15:03:00.397458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.397970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.398360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.398373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.882 [2024-04-26 15:03:00.398383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.882 [2024-04-26 15:03:00.398617] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.882 [2024-04-26 15:03:00.398834] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.882 [2024-04-26 15:03:00.398849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.882 [2024-04-26 15:03:00.398857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.402328] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.882 [2024-04-26 15:03:00.411329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.411865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.412191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.412201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.882 [2024-04-26 15:03:00.412208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.882 [2024-04-26 15:03:00.412423] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.882 [2024-04-26 15:03:00.412638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.882 [2024-04-26 15:03:00.412646] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.882 [2024-04-26 15:03:00.412653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.416126] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.882 [2024-04-26 15:03:00.425171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.425611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.425851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.425862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.882 [2024-04-26 15:03:00.425869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.882 [2024-04-26 15:03:00.426089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.882 [2024-04-26 15:03:00.426303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.882 [2024-04-26 15:03:00.426311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.882 [2024-04-26 15:03:00.426317] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.429787] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.882 [2024-04-26 15:03:00.439007] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.439583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.439971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.439981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.882 [2024-04-26 15:03:00.439989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.882 [2024-04-26 15:03:00.440204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.882 [2024-04-26 15:03:00.440418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.882 [2024-04-26 15:03:00.440426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.882 [2024-04-26 15:03:00.440432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.443903] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.882 [2024-04-26 15:03:00.452921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.453454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.453639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.453649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.882 [2024-04-26 15:03:00.453656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.882 [2024-04-26 15:03:00.453877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.882 [2024-04-26 15:03:00.454092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.882 [2024-04-26 15:03:00.454101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.882 [2024-04-26 15:03:00.454107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.882 [2024-04-26 15:03:00.457585] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.882 [2024-04-26 15:03:00.466817] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.882 [2024-04-26 15:03:00.467390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.882 [2024-04-26 15:03:00.467737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.467747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.883 [2024-04-26 15:03:00.467755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.883 [2024-04-26 15:03:00.467982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.883 [2024-04-26 15:03:00.468201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.883 [2024-04-26 15:03:00.468209] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.883 [2024-04-26 15:03:00.468216] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.883 [2024-04-26 15:03:00.471686] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.883 [2024-04-26 15:03:00.480703] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.883 [2024-04-26 15:03:00.481251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.481550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.481559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.883 [2024-04-26 15:03:00.481567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.883 [2024-04-26 15:03:00.481781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.883 [2024-04-26 15:03:00.482002] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.883 [2024-04-26 15:03:00.482010] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.883 [2024-04-26 15:03:00.482017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.883 [2024-04-26 15:03:00.485491] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.883 [2024-04-26 15:03:00.494492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.883 [2024-04-26 15:03:00.494923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.495170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.495180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.883 [2024-04-26 15:03:00.495187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.883 [2024-04-26 15:03:00.495402] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.883 [2024-04-26 15:03:00.495616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.883 [2024-04-26 15:03:00.495624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.883 [2024-04-26 15:03:00.495631] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.883 [2024-04-26 15:03:00.499109] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.883 [2024-04-26 15:03:00.508326] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.883 [2024-04-26 15:03:00.508898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.509218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.509228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.883 [2024-04-26 15:03:00.509235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.883 [2024-04-26 15:03:00.509450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.883 [2024-04-26 15:03:00.509664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.883 [2024-04-26 15:03:00.509676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.883 [2024-04-26 15:03:00.509683] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.883 [2024-04-26 15:03:00.513162] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.883 [2024-04-26 15:03:00.522177] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.883 [2024-04-26 15:03:00.522641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.522960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.522970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.883 [2024-04-26 15:03:00.522978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.883 [2024-04-26 15:03:00.523193] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.883 [2024-04-26 15:03:00.523406] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.883 [2024-04-26 15:03:00.523415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.883 [2024-04-26 15:03:00.523422] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.883 [2024-04-26 15:03:00.526899] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.883 [2024-04-26 15:03:00.535931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.883 [2024-04-26 15:03:00.536504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.536814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.883 [2024-04-26 15:03:00.536823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:17.883 [2024-04-26 15:03:00.536831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:17.883 [2024-04-26 15:03:00.537052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:17.883 [2024-04-26 15:03:00.537266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.883 [2024-04-26 15:03:00.537274] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.883 [2024-04-26 15:03:00.537281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.883 [2024-04-26 15:03:00.540753] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.146 [2024-04-26 15:03:00.549763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.146 [2024-04-26 15:03:00.550251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.550577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.550586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.146 [2024-04-26 15:03:00.550593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.146 [2024-04-26 15:03:00.550808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.146 [2024-04-26 15:03:00.551027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.146 [2024-04-26 15:03:00.551035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.146 [2024-04-26 15:03:00.551045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.146 [2024-04-26 15:03:00.554518] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.146 [2024-04-26 15:03:00.563540] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.146 [2024-04-26 15:03:00.564118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.564448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.564457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.146 [2024-04-26 15:03:00.564464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.146 [2024-04-26 15:03:00.564679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.146 [2024-04-26 15:03:00.564898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.146 [2024-04-26 15:03:00.564906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.146 [2024-04-26 15:03:00.564913] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.146 [2024-04-26 15:03:00.568393] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.146 [2024-04-26 15:03:00.577410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.146 [2024-04-26 15:03:00.577960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.578343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.578356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.146 [2024-04-26 15:03:00.578365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.146 [2024-04-26 15:03:00.578599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.146 [2024-04-26 15:03:00.578816] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.146 [2024-04-26 15:03:00.578825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.146 [2024-04-26 15:03:00.578832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.146 [2024-04-26 15:03:00.582316] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.146 [2024-04-26 15:03:00.591121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.146 [2024-04-26 15:03:00.591540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.591787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.591797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.146 [2024-04-26 15:03:00.591806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.146 [2024-04-26 15:03:00.592026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.146 [2024-04-26 15:03:00.592242] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.146 [2024-04-26 15:03:00.592250] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.146 [2024-04-26 15:03:00.592257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.146 [2024-04-26 15:03:00.595731] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.146 [2024-04-26 15:03:00.604944] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.146 [2024-04-26 15:03:00.605577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.605925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.146 [2024-04-26 15:03:00.605940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.146 [2024-04-26 15:03:00.605950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.146 [2024-04-26 15:03:00.606183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.146 [2024-04-26 15:03:00.606401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.146 [2024-04-26 15:03:00.606410] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.606418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.609894] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.147 [2024-04-26 15:03:00.618739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.147 [2024-04-26 15:03:00.619307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.619663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.619673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.147 [2024-04-26 15:03:00.619681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.147 [2024-04-26 15:03:00.619902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.147 [2024-04-26 15:03:00.620118] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.147 [2024-04-26 15:03:00.620126] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.620133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.623602] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.147 [2024-04-26 15:03:00.632618] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.147 [2024-04-26 15:03:00.633122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.633431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.633440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.147 [2024-04-26 15:03:00.633448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.147 [2024-04-26 15:03:00.633663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.147 [2024-04-26 15:03:00.633881] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.147 [2024-04-26 15:03:00.633889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.633896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.637368] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.147 [2024-04-26 15:03:00.646379] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.147 [2024-04-26 15:03:00.646928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.647274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.647288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.147 [2024-04-26 15:03:00.647298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.147 [2024-04-26 15:03:00.647531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.147 [2024-04-26 15:03:00.647749] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.147 [2024-04-26 15:03:00.647757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.647764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.651245] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.147 [2024-04-26 15:03:00.660262] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.147 [2024-04-26 15:03:00.660933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.661347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.661359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.147 [2024-04-26 15:03:00.661369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.147 [2024-04-26 15:03:00.661603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.147 [2024-04-26 15:03:00.661820] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.147 [2024-04-26 15:03:00.661828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.661836] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.665318] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.147 [2024-04-26 15:03:00.674126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.147 [2024-04-26 15:03:00.674805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.675170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.675183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.147 [2024-04-26 15:03:00.675193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.147 [2024-04-26 15:03:00.675427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.147 [2024-04-26 15:03:00.675644] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.147 [2024-04-26 15:03:00.675652] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.675659] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.679138] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.147 [2024-04-26 15:03:00.687945] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.147 [2024-04-26 15:03:00.688547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.688886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.688901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.147 [2024-04-26 15:03:00.688911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.147 [2024-04-26 15:03:00.689144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.147 [2024-04-26 15:03:00.689362] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.147 [2024-04-26 15:03:00.689369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.689377] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.692857] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.147 [2024-04-26 15:03:00.701863] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.147 [2024-04-26 15:03:00.702548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.702890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.702905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.147 [2024-04-26 15:03:00.702914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.147 [2024-04-26 15:03:00.703148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.147 [2024-04-26 15:03:00.703365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.147 [2024-04-26 15:03:00.703373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.703380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.706862] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.147 [2024-04-26 15:03:00.715666] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.147 [2024-04-26 15:03:00.716100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.716171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.716180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.147 [2024-04-26 15:03:00.716189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.147 [2024-04-26 15:03:00.716404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.147 [2024-04-26 15:03:00.716619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.147 [2024-04-26 15:03:00.716628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.716635] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.720110] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.147 [2024-04-26 15:03:00.729523] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.147 [2024-04-26 15:03:00.729935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.730263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.147 [2024-04-26 15:03:00.730277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.147 [2024-04-26 15:03:00.730285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.147 [2024-04-26 15:03:00.730500] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.147 [2024-04-26 15:03:00.730714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.147 [2024-04-26 15:03:00.730723] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.147 [2024-04-26 15:03:00.730730] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.147 [2024-04-26 15:03:00.734215] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.148 [2024-04-26 15:03:00.743422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.148 [2024-04-26 15:03:00.743969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.744313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.744323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.148 [2024-04-26 15:03:00.744331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.148 [2024-04-26 15:03:00.744546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.148 [2024-04-26 15:03:00.744760] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.148 [2024-04-26 15:03:00.744769] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.148 [2024-04-26 15:03:00.744776] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.148 [2024-04-26 15:03:00.748252] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.148 [2024-04-26 15:03:00.757263] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.148 [2024-04-26 15:03:00.757842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.758151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.758161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.148 [2024-04-26 15:03:00.758169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.148 [2024-04-26 15:03:00.758383] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.148 [2024-04-26 15:03:00.758598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.148 [2024-04-26 15:03:00.758606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.148 [2024-04-26 15:03:00.758613] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.148 [2024-04-26 15:03:00.762088] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.148 [2024-04-26 15:03:00.771092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.148 [2024-04-26 15:03:00.771733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.772085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.772100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.148 [2024-04-26 15:03:00.772114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.148 [2024-04-26 15:03:00.772348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.148 [2024-04-26 15:03:00.772566] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.148 [2024-04-26 15:03:00.772574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.148 [2024-04-26 15:03:00.772581] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.148 [2024-04-26 15:03:00.776063] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.148 [2024-04-26 15:03:00.784865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.148 [2024-04-26 15:03:00.785534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.785769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.785782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.148 [2024-04-26 15:03:00.785792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.148 [2024-04-26 15:03:00.786034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.148 [2024-04-26 15:03:00.786253] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.148 [2024-04-26 15:03:00.786261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.148 [2024-04-26 15:03:00.786268] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.148 [2024-04-26 15:03:00.789743] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.148 [2024-04-26 15:03:00.798745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.148 [2024-04-26 15:03:00.799399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.799736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.148 [2024-04-26 15:03:00.799749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.148 [2024-04-26 15:03:00.799759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.148 [2024-04-26 15:03:00.799999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.148 [2024-04-26 15:03:00.800217] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.148 [2024-04-26 15:03:00.800225] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.148 [2024-04-26 15:03:00.800232] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.148 [2024-04-26 15:03:00.803704] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-04-26 15:03:00.812507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-04-26 15:03:00.812925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.813228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.813239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-04-26 15:03:00.813247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.411 [2024-04-26 15:03:00.813469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.411 [2024-04-26 15:03:00.813685] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-04-26 15:03:00.813692] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-04-26 15:03:00.813699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-04-26 15:03:00.817182] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.411 [2024-04-26 15:03:00.826394] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-04-26 15:03:00.827099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.827440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.827453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-04-26 15:03:00.827462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.411 [2024-04-26 15:03:00.827696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.411 [2024-04-26 15:03:00.827920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-04-26 15:03:00.827929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-04-26 15:03:00.827936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-04-26 15:03:00.831423] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-04-26 15:03:00.840228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-04-26 15:03:00.840921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.841269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.841282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-04-26 15:03:00.841292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.411 [2024-04-26 15:03:00.841526] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.411 [2024-04-26 15:03:00.841743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-04-26 15:03:00.841752] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-04-26 15:03:00.841759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-04-26 15:03:00.845239] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.411 [2024-04-26 15:03:00.854045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-04-26 15:03:00.854715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.855074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.855088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-04-26 15:03:00.855098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.411 [2024-04-26 15:03:00.855332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.411 [2024-04-26 15:03:00.855554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-04-26 15:03:00.855562] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-04-26 15:03:00.855570] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-04-26 15:03:00.859053] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-04-26 15:03:00.867861] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-04-26 15:03:00.868479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.868815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.868828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-04-26 15:03:00.868847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.411 [2024-04-26 15:03:00.869081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.411 [2024-04-26 15:03:00.869299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-04-26 15:03:00.869307] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-04-26 15:03:00.869315] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-04-26 15:03:00.872888] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.411 [2024-04-26 15:03:00.881695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-04-26 15:03:00.882369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.882707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.882722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-04-26 15:03:00.882732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.411 [2024-04-26 15:03:00.882974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.411 [2024-04-26 15:03:00.883192] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-04-26 15:03:00.883200] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-04-26 15:03:00.883207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-04-26 15:03:00.886680] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-04-26 15:03:00.895484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-04-26 15:03:00.896183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.896521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.896534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-04-26 15:03:00.896544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.411 [2024-04-26 15:03:00.896777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.411 [2024-04-26 15:03:00.897003] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-04-26 15:03:00.897017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-04-26 15:03:00.897024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-04-26 15:03:00.900497] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.411 [2024-04-26 15:03:00.909301] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-04-26 15:03:00.909955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.910349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.910363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-04-26 15:03:00.910372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.411 [2024-04-26 15:03:00.910606] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.411 [2024-04-26 15:03:00.910824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-04-26 15:03:00.910832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-04-26 15:03:00.910846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-04-26 15:03:00.914322] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-04-26 15:03:00.923130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-04-26 15:03:00.923703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.924033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-04-26 15:03:00.924045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-04-26 15:03:00.924052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.411 [2024-04-26 15:03:00.924268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.411 [2024-04-26 15:03:00.924482] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-04-26 15:03:00.924491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:00.924497] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:00.927972] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-04-26 15:03:00.936992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:00.937642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.937936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.937951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-04-26 15:03:00.937960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.412 [2024-04-26 15:03:00.938194] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.412 [2024-04-26 15:03:00.938412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-04-26 15:03:00.938420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:00.938432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:00.941910] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.412 [2024-04-26 15:03:00.950714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:00.951374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.951723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.951735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-04-26 15:03:00.951745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.412 [2024-04-26 15:03:00.951987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.412 [2024-04-26 15:03:00.952205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-04-26 15:03:00.952213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:00.952220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:00.955695] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-04-26 15:03:00.964510] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:00.965128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.965409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.965422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-04-26 15:03:00.965432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.412 [2024-04-26 15:03:00.965666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.412 [2024-04-26 15:03:00.965890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-04-26 15:03:00.965899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:00.965906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:00.969381] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.412 [2024-04-26 15:03:00.978396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:00.978961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.979313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.979326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-04-26 15:03:00.979337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.412 [2024-04-26 15:03:00.979571] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.412 [2024-04-26 15:03:00.979790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-04-26 15:03:00.979799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:00.979806] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:00.983292] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-04-26 15:03:00.992304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:00.992984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.993312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:00.993325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-04-26 15:03:00.993335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.412 [2024-04-26 15:03:00.993569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.412 [2024-04-26 15:03:00.993786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-04-26 15:03:00.993795] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:00.993802] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:00.997285] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.412 [2024-04-26 15:03:01.006089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:01.006713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:01.007049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:01.007064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-04-26 15:03:01.007074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.412 [2024-04-26 15:03:01.007308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.412 [2024-04-26 15:03:01.007526] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-04-26 15:03:01.007535] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:01.007543] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:01.011021] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-04-26 15:03:01.019823] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:01.020369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:01.020688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:01.020698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-04-26 15:03:01.020706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.412 [2024-04-26 15:03:01.020926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.412 [2024-04-26 15:03:01.021141] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-04-26 15:03:01.021149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:01.021157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:01.024628] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.412 [2024-04-26 15:03:01.033648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:01.034138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:01.034367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:01.034380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-04-26 15:03:01.034390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.412 [2024-04-26 15:03:01.034624] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.412 [2024-04-26 15:03:01.034849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-04-26 15:03:01.034858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:01.034865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:01.038341] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-04-26 15:03:01.047558] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:01.048217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:01.048553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:01.048566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-04-26 15:03:01.048575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.412 [2024-04-26 15:03:01.048809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.412 [2024-04-26 15:03:01.049033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-04-26 15:03:01.049044] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-04-26 15:03:01.049052] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-04-26 15:03:01.052526] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.412 [2024-04-26 15:03:01.061337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-04-26 15:03:01.061942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-04-26 15:03:01.062308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.413 [2024-04-26 15:03:01.062322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.413 [2024-04-26 15:03:01.062332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.413 [2024-04-26 15:03:01.062566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.413 [2024-04-26 15:03:01.062784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.413 [2024-04-26 15:03:01.062792] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.413 [2024-04-26 15:03:01.062799] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.413 [2024-04-26 15:03:01.066280] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.676 [2024-04-26 15:03:01.075087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.676 [2024-04-26 15:03:01.075627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.676 [2024-04-26 15:03:01.075955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.676 [2024-04-26 15:03:01.075966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.676 [2024-04-26 15:03:01.075975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.676 [2024-04-26 15:03:01.076190] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.676 [2024-04-26 15:03:01.076405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.676 [2024-04-26 15:03:01.076413] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.076420] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.079893] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.677 [2024-04-26 15:03:01.088900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.677 [2024-04-26 15:03:01.089524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.089867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.089882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.677 [2024-04-26 15:03:01.089892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.677 [2024-04-26 15:03:01.090126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.677 [2024-04-26 15:03:01.090343] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.677 [2024-04-26 15:03:01.090353] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.090360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.093836] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.677 [2024-04-26 15:03:01.102641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.677 [2024-04-26 15:03:01.103299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.103629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.103642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.677 [2024-04-26 15:03:01.103652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.677 [2024-04-26 15:03:01.103892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.677 [2024-04-26 15:03:01.104110] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.677 [2024-04-26 15:03:01.104120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.104127] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.107602] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.677 [2024-04-26 15:03:01.116406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.677 [2024-04-26 15:03:01.117071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.117317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.117343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.677 [2024-04-26 15:03:01.117353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.677 [2024-04-26 15:03:01.117587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.677 [2024-04-26 15:03:01.117804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.677 [2024-04-26 15:03:01.117812] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.117819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.121302] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.677 [2024-04-26 15:03:01.130312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.677 [2024-04-26 15:03:01.130945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.131294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.131307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.677 [2024-04-26 15:03:01.131316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.677 [2024-04-26 15:03:01.131549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.677 [2024-04-26 15:03:01.131767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.677 [2024-04-26 15:03:01.131783] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.131791] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.135281] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.677 [2024-04-26 15:03:01.144086] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.677 [2024-04-26 15:03:01.144667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.144982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.144992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.677 [2024-04-26 15:03:01.145000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.677 [2024-04-26 15:03:01.145216] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.677 [2024-04-26 15:03:01.145431] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.677 [2024-04-26 15:03:01.145439] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.145446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.148921] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.677 [2024-04-26 15:03:01.157932] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.677 [2024-04-26 15:03:01.158571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.158906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.158929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.677 [2024-04-26 15:03:01.158943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.677 [2024-04-26 15:03:01.159176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.677 [2024-04-26 15:03:01.159394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.677 [2024-04-26 15:03:01.159403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.159410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.162889] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.677 [2024-04-26 15:03:01.171689] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.677 [2024-04-26 15:03:01.172352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.172691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.172704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.677 [2024-04-26 15:03:01.172714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.677 [2024-04-26 15:03:01.172954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.677 [2024-04-26 15:03:01.173172] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.677 [2024-04-26 15:03:01.173181] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.173189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.176663] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.677 [2024-04-26 15:03:01.185465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.677 [2024-04-26 15:03:01.186108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.186382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.186395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.677 [2024-04-26 15:03:01.186405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.677 [2024-04-26 15:03:01.186639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.677 [2024-04-26 15:03:01.186864] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.677 [2024-04-26 15:03:01.186872] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.186880] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.190355] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.677 [2024-04-26 15:03:01.199369] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.677 [2024-04-26 15:03:01.199953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.200222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.677 [2024-04-26 15:03:01.200234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.677 [2024-04-26 15:03:01.200244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.677 [2024-04-26 15:03:01.200482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.677 [2024-04-26 15:03:01.200700] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.677 [2024-04-26 15:03:01.200708] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.677 [2024-04-26 15:03:01.200716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.677 [2024-04-26 15:03:01.204199] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.678 [2024-04-26 15:03:01.213212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.213902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.214273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.214285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.678 [2024-04-26 15:03:01.214295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.678 [2024-04-26 15:03:01.214528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.678 [2024-04-26 15:03:01.214745] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.678 [2024-04-26 15:03:01.214754] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.678 [2024-04-26 15:03:01.214761] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.678 [2024-04-26 15:03:01.218241] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.678 [2024-04-26 15:03:01.227047] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.227616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.227962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.227974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.678 [2024-04-26 15:03:01.227983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.678 [2024-04-26 15:03:01.228199] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.678 [2024-04-26 15:03:01.228415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.678 [2024-04-26 15:03:01.228423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.678 [2024-04-26 15:03:01.228430] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.678 [2024-04-26 15:03:01.231913] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.678 [2024-04-26 15:03:01.240926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.241576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.241917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.241932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.678 [2024-04-26 15:03:01.241943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.678 [2024-04-26 15:03:01.242178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.678 [2024-04-26 15:03:01.242401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.678 [2024-04-26 15:03:01.242411] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.678 [2024-04-26 15:03:01.242418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.678 [2024-04-26 15:03:01.245902] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.678 [2024-04-26 15:03:01.254701] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.255243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.255473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.255484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.678 [2024-04-26 15:03:01.255492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.678 [2024-04-26 15:03:01.255707] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.678 [2024-04-26 15:03:01.255927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.678 [2024-04-26 15:03:01.255935] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.678 [2024-04-26 15:03:01.255942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.678 [2024-04-26 15:03:01.259418] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.678 [2024-04-26 15:03:01.268425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.268959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.269353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.269366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.678 [2024-04-26 15:03:01.269376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.678 [2024-04-26 15:03:01.269609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.678 [2024-04-26 15:03:01.269827] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.678 [2024-04-26 15:03:01.269835] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.678 [2024-04-26 15:03:01.269850] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.678 [2024-04-26 15:03:01.273325] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.678 [2024-04-26 15:03:01.282333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.282923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.283155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.283169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.678 [2024-04-26 15:03:01.283178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.678 [2024-04-26 15:03:01.283412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.678 [2024-04-26 15:03:01.283630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.678 [2024-04-26 15:03:01.283642] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.678 [2024-04-26 15:03:01.283649] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.678 [2024-04-26 15:03:01.287135] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.678 [2024-04-26 15:03:01.296233] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.296778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.297123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.297134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.678 [2024-04-26 15:03:01.297142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.678 [2024-04-26 15:03:01.297358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.678 [2024-04-26 15:03:01.297573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.678 [2024-04-26 15:03:01.297580] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.678 [2024-04-26 15:03:01.297588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.678 [2024-04-26 15:03:01.301060] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.678 [2024-04-26 15:03:01.310068] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.310690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.310977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.310992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.678 [2024-04-26 15:03:01.311002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.678 [2024-04-26 15:03:01.311236] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.678 [2024-04-26 15:03:01.311455] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.678 [2024-04-26 15:03:01.311463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.678 [2024-04-26 15:03:01.311471] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.678 [2024-04-26 15:03:01.314951] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.678 [2024-04-26 15:03:01.323962] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.324592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.324879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.324893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.678 [2024-04-26 15:03:01.324903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.678 [2024-04-26 15:03:01.325137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.678 [2024-04-26 15:03:01.325355] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.678 [2024-04-26 15:03:01.325364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.678 [2024-04-26 15:03:01.325378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.678 [2024-04-26 15:03:01.328859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.678 [2024-04-26 15:03:01.337671] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.678 [2024-04-26 15:03:01.338288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.678 [2024-04-26 15:03:01.338613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.679 [2024-04-26 15:03:01.338626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.679 [2024-04-26 15:03:01.338636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.679 [2024-04-26 15:03:01.338876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.679 [2024-04-26 15:03:01.339094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.679 [2024-04-26 15:03:01.339102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.679 [2024-04-26 15:03:01.339110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.958 [2024-04-26 15:03:01.342586] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.958 [2024-04-26 15:03:01.351390] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.958 [2024-04-26 15:03:01.351948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.352270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.352279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.958 [2024-04-26 15:03:01.352287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.958 [2024-04-26 15:03:01.352503] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.958 [2024-04-26 15:03:01.352717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.958 [2024-04-26 15:03:01.352726] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.958 [2024-04-26 15:03:01.352733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.958 [2024-04-26 15:03:01.356211] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.958 [2024-04-26 15:03:01.365223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.958 [2024-04-26 15:03:01.365758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.366085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.366095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.958 [2024-04-26 15:03:01.366103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.958 [2024-04-26 15:03:01.366318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.958 [2024-04-26 15:03:01.366532] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.958 [2024-04-26 15:03:01.366541] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.958 [2024-04-26 15:03:01.366548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.958 [2024-04-26 15:03:01.370025] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.958 [2024-04-26 15:03:01.379028] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.958 [2024-04-26 15:03:01.379605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.379952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.379962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.958 [2024-04-26 15:03:01.379970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.958 [2024-04-26 15:03:01.380184] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.958 [2024-04-26 15:03:01.380399] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.958 [2024-04-26 15:03:01.380408] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.958 [2024-04-26 15:03:01.380415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.958 [2024-04-26 15:03:01.383888] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.958 [2024-04-26 15:03:01.392896] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.958 [2024-04-26 15:03:01.393426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.393738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.393748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.958 [2024-04-26 15:03:01.393755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.958 [2024-04-26 15:03:01.393975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.958 [2024-04-26 15:03:01.394189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.958 [2024-04-26 15:03:01.394198] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.958 [2024-04-26 15:03:01.394205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.958 [2024-04-26 15:03:01.397676] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.958 [2024-04-26 15:03:01.406682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.958 [2024-04-26 15:03:01.407231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.407551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.407561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.958 [2024-04-26 15:03:01.407568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.958 [2024-04-26 15:03:01.407783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.958 [2024-04-26 15:03:01.408001] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.958 [2024-04-26 15:03:01.408016] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.958 [2024-04-26 15:03:01.408023] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.958 [2024-04-26 15:03:01.411491] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.958 [2024-04-26 15:03:01.420495] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.958 [2024-04-26 15:03:01.421180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.421520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.421533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.958 [2024-04-26 15:03:01.421542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.958 [2024-04-26 15:03:01.421776] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.958 [2024-04-26 15:03:01.422002] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.958 [2024-04-26 15:03:01.422012] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.958 [2024-04-26 15:03:01.422019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.958 [2024-04-26 15:03:01.425493] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.958 [2024-04-26 15:03:01.434308] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.958 [2024-04-26 15:03:01.434875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.958 [2024-04-26 15:03:01.435085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.435100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.435110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.435344] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.959 [2024-04-26 15:03:01.435563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.959 [2024-04-26 15:03:01.435571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.959 [2024-04-26 15:03:01.435578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.959 [2024-04-26 15:03:01.439060] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.959 [2024-04-26 15:03:01.448073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.959 [2024-04-26 15:03:01.448749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.449119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.449134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.449144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.449378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.959 [2024-04-26 15:03:01.449595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.959 [2024-04-26 15:03:01.449604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.959 [2024-04-26 15:03:01.449611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.959 [2024-04-26 15:03:01.453089] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.959 [2024-04-26 15:03:01.461901] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.959 [2024-04-26 15:03:01.462365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.462673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.462683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.462691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.462913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.959 [2024-04-26 15:03:01.463129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.959 [2024-04-26 15:03:01.463136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.959 [2024-04-26 15:03:01.463143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.959 [2024-04-26 15:03:01.466611] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.959 [2024-04-26 15:03:01.475622] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.959 [2024-04-26 15:03:01.476246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.476553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.476567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.476577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.476811] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.959 [2024-04-26 15:03:01.477036] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.959 [2024-04-26 15:03:01.477046] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.959 [2024-04-26 15:03:01.477054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.959 [2024-04-26 15:03:01.480531] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.959 [2024-04-26 15:03:01.489336] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.959 [2024-04-26 15:03:01.489927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.490258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.490272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.490281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.490516] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.959 [2024-04-26 15:03:01.490734] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.959 [2024-04-26 15:03:01.490742] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.959 [2024-04-26 15:03:01.490749] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.959 [2024-04-26 15:03:01.494227] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.959 [2024-04-26 15:03:01.503252] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.959 [2024-04-26 15:03:01.503826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.504079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.504095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.504103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.504319] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.959 [2024-04-26 15:03:01.504533] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.959 [2024-04-26 15:03:01.504540] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.959 [2024-04-26 15:03:01.504547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.959 [2024-04-26 15:03:01.508022] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.959 [2024-04-26 15:03:01.517058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.959 [2024-04-26 15:03:01.517697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.518037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.518053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.518062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.518296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.959 [2024-04-26 15:03:01.518514] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.959 [2024-04-26 15:03:01.518523] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.959 [2024-04-26 15:03:01.518530] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.959 [2024-04-26 15:03:01.522012] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.959 [2024-04-26 15:03:01.530813] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.959 [2024-04-26 15:03:01.531382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.531723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.531737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.531747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.531997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.959 [2024-04-26 15:03:01.532215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.959 [2024-04-26 15:03:01.532224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.959 [2024-04-26 15:03:01.532232] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.959 [2024-04-26 15:03:01.535706] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.959 [2024-04-26 15:03:01.544719] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.959 [2024-04-26 15:03:01.545404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.545744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.545757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.545771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.546012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.959 [2024-04-26 15:03:01.546231] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.959 [2024-04-26 15:03:01.546240] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.959 [2024-04-26 15:03:01.546248] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.959 [2024-04-26 15:03:01.549720] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.959 [2024-04-26 15:03:01.558534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.959 [2024-04-26 15:03:01.559165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.559467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.959 [2024-04-26 15:03:01.559477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.959 [2024-04-26 15:03:01.559485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.959 [2024-04-26 15:03:01.559701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.960 [2024-04-26 15:03:01.559921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.960 [2024-04-26 15:03:01.559929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.960 [2024-04-26 15:03:01.559936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.960 [2024-04-26 15:03:01.563409] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.960 [2024-04-26 15:03:01.572416] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.960 [2024-04-26 15:03:01.573085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.960 [2024-04-26 15:03:01.573417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.960 [2024-04-26 15:03:01.573431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.960 [2024-04-26 15:03:01.573440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.960 [2024-04-26 15:03:01.573674] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.960 [2024-04-26 15:03:01.573897] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.960 [2024-04-26 15:03:01.573906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.960 [2024-04-26 15:03:01.573913] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.960 [2024-04-26 15:03:01.577431] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.960 [2024-04-26 15:03:01.586239] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.960 [2024-04-26 15:03:01.586779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.960 [2024-04-26 15:03:01.587018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.960 [2024-04-26 15:03:01.587036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.960 [2024-04-26 15:03:01.587044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.960 [2024-04-26 15:03:01.587265] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.960 [2024-04-26 15:03:01.587480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.960 [2024-04-26 15:03:01.587487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.960 [2024-04-26 15:03:01.587494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.960 [2024-04-26 15:03:01.590971] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.960 [2024-04-26 15:03:01.599983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.960 [2024-04-26 15:03:01.600612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.960 [2024-04-26 15:03:01.600948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.960 [2024-04-26 15:03:01.600962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.960 [2024-04-26 15:03:01.600972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.960 [2024-04-26 15:03:01.601205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.960 [2024-04-26 15:03:01.601423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.960 [2024-04-26 15:03:01.601432] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.960 [2024-04-26 15:03:01.601439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.960 [2024-04-26 15:03:01.604917] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.960 [2024-04-26 15:03:01.613722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.960 [2024-04-26 15:03:01.614179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.960 [2024-04-26 15:03:01.614416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.960 [2024-04-26 15:03:01.614426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:18.960 [2024-04-26 15:03:01.614435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:18.960 [2024-04-26 15:03:01.614652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:18.960 [2024-04-26 15:03:01.614871] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.960 [2024-04-26 15:03:01.614879] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.960 [2024-04-26 15:03:01.614886] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.230 [2024-04-26 15:03:01.618354] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.230 [2024-04-26 15:03:01.627565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.230 [2024-04-26 15:03:01.628021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.230 [2024-04-26 15:03:01.628346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.230 [2024-04-26 15:03:01.628357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.230 [2024-04-26 15:03:01.628364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.230 [2024-04-26 15:03:01.628579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.230 [2024-04-26 15:03:01.628798] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.230 [2024-04-26 15:03:01.628806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.230 [2024-04-26 15:03:01.628813] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.230 [2024-04-26 15:03:01.632297] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.230 [2024-04-26 15:03:01.641305] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.230 [2024-04-26 15:03:01.641739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.230 [2024-04-26 15:03:01.641851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.230 [2024-04-26 15:03:01.641861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.230 [2024-04-26 15:03:01.641869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.230 [2024-04-26 15:03:01.642083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.231 [2024-04-26 15:03:01.642298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.231 [2024-04-26 15:03:01.642305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.231 [2024-04-26 15:03:01.642312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.231 [2024-04-26 15:03:01.645783] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.231 [2024-04-26 15:03:01.655196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.231 [2024-04-26 15:03:01.655727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.656012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.656024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.231 [2024-04-26 15:03:01.656032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.231 [2024-04-26 15:03:01.656246] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.231 [2024-04-26 15:03:01.656461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.231 [2024-04-26 15:03:01.656469] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.231 [2024-04-26 15:03:01.656476] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.231 [2024-04-26 15:03:01.659954] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.231 [2024-04-26 15:03:01.668962] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.231 [2024-04-26 15:03:01.669530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.669847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.669857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.231 [2024-04-26 15:03:01.669865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.231 [2024-04-26 15:03:01.670080] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.231 [2024-04-26 15:03:01.670294] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.231 [2024-04-26 15:03:01.670306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.231 [2024-04-26 15:03:01.670313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.231 [2024-04-26 15:03:01.673783] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.231 [2024-04-26 15:03:01.682789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.231 [2024-04-26 15:03:01.683277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.683474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.683485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.231 [2024-04-26 15:03:01.683493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.231 [2024-04-26 15:03:01.683709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.231 [2024-04-26 15:03:01.683927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.231 [2024-04-26 15:03:01.683935] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.231 [2024-04-26 15:03:01.683942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.231 [2024-04-26 15:03:01.687414] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.231 [2024-04-26 15:03:01.696623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.231 [2024-04-26 15:03:01.697104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.697397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.697407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.231 [2024-04-26 15:03:01.697415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.231 [2024-04-26 15:03:01.697629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.231 [2024-04-26 15:03:01.697849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.231 [2024-04-26 15:03:01.697857] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.231 [2024-04-26 15:03:01.697864] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.231 [2024-04-26 15:03:01.701333] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.231 [2024-04-26 15:03:01.710335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.231 [2024-04-26 15:03:01.710801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.711096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.711107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.231 [2024-04-26 15:03:01.711115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.231 [2024-04-26 15:03:01.711329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.231 [2024-04-26 15:03:01.711543] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.231 [2024-04-26 15:03:01.711551] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.231 [2024-04-26 15:03:01.711562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.231 [2024-04-26 15:03:01.715037] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.231 [2024-04-26 15:03:01.724042] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.231 [2024-04-26 15:03:01.724595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.724914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.724924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.231 [2024-04-26 15:03:01.724932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.231 [2024-04-26 15:03:01.725147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.231 [2024-04-26 15:03:01.725361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.231 [2024-04-26 15:03:01.725368] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.231 [2024-04-26 15:03:01.725375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.231 [2024-04-26 15:03:01.728849] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.231 [2024-04-26 15:03:01.737862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.231 [2024-04-26 15:03:01.738427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.738736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.231 [2024-04-26 15:03:01.738746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.231 [2024-04-26 15:03:01.738754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.231 [2024-04-26 15:03:01.738973] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.231 [2024-04-26 15:03:01.739188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.231 [2024-04-26 15:03:01.739196] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.232 [2024-04-26 15:03:01.739202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.232 [2024-04-26 15:03:01.742668] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.232 [2024-04-26 15:03:01.751670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.232 [2024-04-26 15:03:01.752206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.752561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.752570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.232 [2024-04-26 15:03:01.752578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.232 [2024-04-26 15:03:01.752792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.232 [2024-04-26 15:03:01.753010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.232 [2024-04-26 15:03:01.753018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.232 [2024-04-26 15:03:01.753025] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.232 [2024-04-26 15:03:01.756498] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.232 [2024-04-26 15:03:01.765507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.232 [2024-04-26 15:03:01.766061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.766370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.766380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.232 [2024-04-26 15:03:01.766387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.232 [2024-04-26 15:03:01.766602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.232 [2024-04-26 15:03:01.766816] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.232 [2024-04-26 15:03:01.766824] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.232 [2024-04-26 15:03:01.766831] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.232 [2024-04-26 15:03:01.770309] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.232 [2024-04-26 15:03:01.779315] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.232 [2024-04-26 15:03:01.779847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.780163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.780173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.232 [2024-04-26 15:03:01.780181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.232 [2024-04-26 15:03:01.780395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.232 [2024-04-26 15:03:01.780609] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.232 [2024-04-26 15:03:01.780616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.232 [2024-04-26 15:03:01.780623] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.232 [2024-04-26 15:03:01.784099] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.232 [2024-04-26 15:03:01.793104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.232 [2024-04-26 15:03:01.793765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.794135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.794150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.232 [2024-04-26 15:03:01.794159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.232 [2024-04-26 15:03:01.794393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.232 [2024-04-26 15:03:01.794610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.232 [2024-04-26 15:03:01.794619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.232 [2024-04-26 15:03:01.794626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.232 [2024-04-26 15:03:01.798102] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.232 [2024-04-26 15:03:01.806912] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.232 [2024-04-26 15:03:01.807496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.807803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.807814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.232 [2024-04-26 15:03:01.807821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.232 [2024-04-26 15:03:01.808041] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.232 [2024-04-26 15:03:01.808257] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.232 [2024-04-26 15:03:01.808273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.232 [2024-04-26 15:03:01.808280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.232 [2024-04-26 15:03:01.811750] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.232 [2024-04-26 15:03:01.820756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.232 [2024-04-26 15:03:01.821316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.821637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.821648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.232 [2024-04-26 15:03:01.821656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.232 [2024-04-26 15:03:01.821876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.232 [2024-04-26 15:03:01.822091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.232 [2024-04-26 15:03:01.822099] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.232 [2024-04-26 15:03:01.822106] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.232 [2024-04-26 15:03:01.825573] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.232 [2024-04-26 15:03:01.834621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.232 [2024-04-26 15:03:01.835177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.835532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.232 [2024-04-26 15:03:01.835541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.232 [2024-04-26 15:03:01.835549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.232 [2024-04-26 15:03:01.835764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.232 [2024-04-26 15:03:01.835982] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.232 [2024-04-26 15:03:01.835991] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.232 [2024-04-26 15:03:01.835998] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.232 [2024-04-26 15:03:01.839468] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.232 [2024-04-26 15:03:01.848469] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.232 [2024-04-26 15:03:01.848919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-04-26 15:03:01.849251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-04-26 15:03:01.849261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-04-26 15:03:01.849268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.233 [2024-04-26 15:03:01.849483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.233 [2024-04-26 15:03:01.849697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.233 [2024-04-26 15:03:01.849705] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.233 [2024-04-26 15:03:01.849711] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.233 [2024-04-26 15:03:01.853183] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.233 [2024-04-26 15:03:01.862194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.233 [2024-04-26 15:03:01.862771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-04-26 15:03:01.863092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-04-26 15:03:01.863102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-04-26 15:03:01.863110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.233 [2024-04-26 15:03:01.863324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.233 [2024-04-26 15:03:01.863538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.233 [2024-04-26 15:03:01.863547] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.233 [2024-04-26 15:03:01.863554] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.233 [2024-04-26 15:03:01.867028] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.233 [2024-04-26 15:03:01.876033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.233 [2024-04-26 15:03:01.876599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-04-26 15:03:01.876920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-04-26 15:03:01.876931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-04-26 15:03:01.876939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.233 [2024-04-26 15:03:01.877153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.233 [2024-04-26 15:03:01.877366] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.233 [2024-04-26 15:03:01.877374] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.233 [2024-04-26 15:03:01.877381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.233 [2024-04-26 15:03:01.880853] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.233 [2024-04-26 15:03:01.889861] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.233 [2024-04-26 15:03:01.890281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-04-26 15:03:01.890569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-04-26 15:03:01.890578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-04-26 15:03:01.890586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.233 [2024-04-26 15:03:01.890801] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.233 [2024-04-26 15:03:01.891022] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.233 [2024-04-26 15:03:01.891030] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.233 [2024-04-26 15:03:01.891037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-04-26 15:03:01.894509] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.496 [2024-04-26 15:03:01.903720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.496 [2024-04-26 15:03:01.904172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.904493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.904504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.496 [2024-04-26 15:03:01.904512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.496 [2024-04-26 15:03:01.904727] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.496 [2024-04-26 15:03:01.904946] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.496 [2024-04-26 15:03:01.904954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.496 [2024-04-26 15:03:01.904960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-04-26 15:03:01.908432] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.496 [2024-04-26 15:03:01.917440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.496 [2024-04-26 15:03:01.918075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.918451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.918464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.496 [2024-04-26 15:03:01.918474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.496 [2024-04-26 15:03:01.918708] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.496 [2024-04-26 15:03:01.918933] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.496 [2024-04-26 15:03:01.918942] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.496 [2024-04-26 15:03:01.918949] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-04-26 15:03:01.922426] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.496 [2024-04-26 15:03:01.931229] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.496 [2024-04-26 15:03:01.931781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.932097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.932109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.496 [2024-04-26 15:03:01.932121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.496 [2024-04-26 15:03:01.932337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.496 [2024-04-26 15:03:01.932551] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.496 [2024-04-26 15:03:01.932559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.496 [2024-04-26 15:03:01.932566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-04-26 15:03:01.936052] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.496 [2024-04-26 15:03:01.945060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.496 [2024-04-26 15:03:01.945586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.945905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.945915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.496 [2024-04-26 15:03:01.945923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.496 [2024-04-26 15:03:01.946138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.496 [2024-04-26 15:03:01.946352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.496 [2024-04-26 15:03:01.946359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.496 [2024-04-26 15:03:01.946366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-04-26 15:03:01.949840] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.496 [2024-04-26 15:03:01.958848] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.496 [2024-04-26 15:03:01.959424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.959785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.959794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.496 [2024-04-26 15:03:01.959802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.496 [2024-04-26 15:03:01.960021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.496 [2024-04-26 15:03:01.960235] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.496 [2024-04-26 15:03:01.960243] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.496 [2024-04-26 15:03:01.960250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-04-26 15:03:01.963719] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.496 [2024-04-26 15:03:01.972724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.496 [2024-04-26 15:03:01.973233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.973555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.973565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.496 [2024-04-26 15:03:01.973572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.496 [2024-04-26 15:03:01.973794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.496 [2024-04-26 15:03:01.974014] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.496 [2024-04-26 15:03:01.974022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.496 [2024-04-26 15:03:01.974029] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-04-26 15:03:01.977498] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.496 [2024-04-26 15:03:01.986500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.496 [2024-04-26 15:03:01.987050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.987391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-04-26 15:03:01.987405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:01.987415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:01.987650] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:01.987874] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.497 [2024-04-26 15:03:01.987883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.497 [2024-04-26 15:03:01.987890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.497 [2024-04-26 15:03:01.991369] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.497 [2024-04-26 15:03:02.000392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.497 [2024-04-26 15:03:02.001061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.001396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.001409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:02.001419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:02.001652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:02.001875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.497 [2024-04-26 15:03:02.001885] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.497 [2024-04-26 15:03:02.001892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.497 [2024-04-26 15:03:02.005372] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.497 [2024-04-26 15:03:02.014177] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.497 [2024-04-26 15:03:02.014719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.015031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.015042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:02.015050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:02.015266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:02.015485] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.497 [2024-04-26 15:03:02.015494] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.497 [2024-04-26 15:03:02.015501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.497 [2024-04-26 15:03:02.018978] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.497 [2024-04-26 15:03:02.028030] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.497 [2024-04-26 15:03:02.028694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.029048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.029062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:02.029072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:02.029306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:02.029523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.497 [2024-04-26 15:03:02.029533] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.497 [2024-04-26 15:03:02.029540] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.497 [2024-04-26 15:03:02.033016] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.497 [2024-04-26 15:03:02.041833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.497 [2024-04-26 15:03:02.042421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.042722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.042732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:02.042740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:02.042960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:02.043175] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.497 [2024-04-26 15:03:02.043182] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.497 [2024-04-26 15:03:02.043189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.497 [2024-04-26 15:03:02.046658] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.497 [2024-04-26 15:03:02.055672] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.497 [2024-04-26 15:03:02.056343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.056679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.056692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:02.056701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:02.056942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:02.057161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.497 [2024-04-26 15:03:02.057173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.497 [2024-04-26 15:03:02.057180] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.497 [2024-04-26 15:03:02.060664] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.497 [2024-04-26 15:03:02.069475] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.497 [2024-04-26 15:03:02.070067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.070385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.070394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:02.070402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:02.070617] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:02.070831] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.497 [2024-04-26 15:03:02.070842] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.497 [2024-04-26 15:03:02.070850] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.497 [2024-04-26 15:03:02.074323] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.497 [2024-04-26 15:03:02.083325] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.497 [2024-04-26 15:03:02.083863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.084172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.084182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:02.084190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:02.084405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:02.084619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.497 [2024-04-26 15:03:02.084627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.497 [2024-04-26 15:03:02.084634] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.497 [2024-04-26 15:03:02.088107] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.497 [2024-04-26 15:03:02.097110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.497 [2024-04-26 15:03:02.097688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.098032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.098046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:02.098056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:02.098289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:02.098507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.497 [2024-04-26 15:03:02.098516] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.497 [2024-04-26 15:03:02.098527] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.497 [2024-04-26 15:03:02.102007] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.497 [2024-04-26 15:03:02.111023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.497 [2024-04-26 15:03:02.111556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.111881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.497 [2024-04-26 15:03:02.111892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.497 [2024-04-26 15:03:02.111900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.497 [2024-04-26 15:03:02.112116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.497 [2024-04-26 15:03:02.112330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.498 [2024-04-26 15:03:02.112338] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.498 [2024-04-26 15:03:02.112345] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.498 [2024-04-26 15:03:02.115821] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.498 [2024-04-26 15:03:02.124846] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.498 [2024-04-26 15:03:02.125463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.498 [2024-04-26 15:03:02.125797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.498 [2024-04-26 15:03:02.125810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.498 [2024-04-26 15:03:02.125819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.498 [2024-04-26 15:03:02.126061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.498 [2024-04-26 15:03:02.126280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.498 [2024-04-26 15:03:02.126288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.498 [2024-04-26 15:03:02.126295] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.498 [2024-04-26 15:03:02.129780] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.498 [2024-04-26 15:03:02.138605] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.498 [2024-04-26 15:03:02.139131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.498 [2024-04-26 15:03:02.139433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.498 [2024-04-26 15:03:02.139442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.498 [2024-04-26 15:03:02.139450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.498 [2024-04-26 15:03:02.139665] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.498 [2024-04-26 15:03:02.139886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.498 [2024-04-26 15:03:02.139895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.498 [2024-04-26 15:03:02.139902] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.498 [2024-04-26 15:03:02.143382] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.498 [2024-04-26 15:03:02.152399] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.498 [2024-04-26 15:03:02.152911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.498 [2024-04-26 15:03:02.153249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.498 [2024-04-26 15:03:02.153258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.498 [2024-04-26 15:03:02.153266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.498 [2024-04-26 15:03:02.153481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.498 [2024-04-26 15:03:02.153695] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.498 [2024-04-26 15:03:02.153703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.498 [2024-04-26 15:03:02.153710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.498 [2024-04-26 15:03:02.157187] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.761 [2024-04-26 15:03:02.166206] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.761 [2024-04-26 15:03:02.166743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.167048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.167059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.761 [2024-04-26 15:03:02.167067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.761 [2024-04-26 15:03:02.167282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.761 [2024-04-26 15:03:02.167497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.761 [2024-04-26 15:03:02.167504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.761 [2024-04-26 15:03:02.167511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.761 [2024-04-26 15:03:02.170984] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.761 [2024-04-26 15:03:02.179998] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.761 [2024-04-26 15:03:02.180491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.180849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.180863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.761 [2024-04-26 15:03:02.180873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.761 [2024-04-26 15:03:02.181106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.761 [2024-04-26 15:03:02.181324] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.761 [2024-04-26 15:03:02.181332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.761 [2024-04-26 15:03:02.181339] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.761 [2024-04-26 15:03:02.184816] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
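Every reset attempt in the loop above dies the same way: the connect() in posix_sock_create to 10.0.0.2 port 4420 returns errno 111 (ECONNREFUSED on Linux) because nothing is accepting connections on the target side while it is being restarted, so the qpair never comes up and bdev_nvme abandons the reset and schedules the next one. A minimal shell sketch of that probe, assuming only the address and port taken from the log above (illustrative, not part of the test scripts):

    # keep probing 10.0.0.2:4420; connect() is refused (errno 111) until the
    # nvmf target is back up and listening, which is what every
    # nvme_tcp_qpair_connect_sock attempt above runs into
    while ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        sleep 0.2
    done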
00:26:19.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1222657 Killed "${NVMF_APP[@]}" "$@" 00:26:19.761 15:03:02 -- host/bdevperf.sh@36 -- # tgt_init 00:26:19.761 15:03:02 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:19.761 15:03:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:19.761 15:03:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:19.761 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:26:19.761 [2024-04-26 15:03:02.193846] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.761 [2024-04-26 15:03:02.194387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.194605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.194617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.761 [2024-04-26 15:03:02.194625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.761 15:03:02 -- nvmf/common.sh@470 -- # nvmfpid=1224466 00:26:19.761 [2024-04-26 15:03:02.194847] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.761 [2024-04-26 15:03:02.195064] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.761 [2024-04-26 15:03:02.195073] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.761 [2024-04-26 15:03:02.195080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.761 15:03:02 -- nvmf/common.sh@471 -- # waitforlisten 1224466 00:26:19.761 15:03:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:19.761 15:03:02 -- common/autotest_common.sh@817 -- # '[' -z 1224466 ']' 00:26:19.761 15:03:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.761 15:03:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:19.761 15:03:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.761 15:03:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:19.761 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:26:19.761 [2024-04-26 15:03:02.198555] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
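The shell trace interleaved in the line above is bdevperf.sh reacting to the killed target: tgt_init calls nvmfappstart, which launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits for it to listen on /var/tmp/spdk.sock before the host-side resets can succeed again. A simplified sketch of that sequence, assuming the paths shown in the trace (the real helpers are the nvmfappstart/waitforlisten functions in the SPDK test scripts, not this loop):

    # start a new target in the test namespace, as the trace shows
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # rough stand-in for waitforlisten: block until the RPC socket exists
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done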
00:26:19.761 [2024-04-26 15:03:02.207569] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.761 [2024-04-26 15:03:02.208061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.208394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.208407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.761 [2024-04-26 15:03:02.208417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.761 [2024-04-26 15:03:02.208651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.761 [2024-04-26 15:03:02.208876] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.761 [2024-04-26 15:03:02.208886] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.761 [2024-04-26 15:03:02.208893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.761 [2024-04-26 15:03:02.212371] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.761 [2024-04-26 15:03:02.221392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.761 [2024-04-26 15:03:02.221911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.222142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.761 [2024-04-26 15:03:02.222153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.761 [2024-04-26 15:03:02.222160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.761 [2024-04-26 15:03:02.222375] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.761 [2024-04-26 15:03:02.222590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.762 [2024-04-26 15:03:02.222598] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.762 [2024-04-26 15:03:02.222604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.762 [2024-04-26 15:03:02.226086] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.762 [2024-04-26 15:03:02.235323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.762 [2024-04-26 15:03:02.235959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.236321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.236335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.762 [2024-04-26 15:03:02.236345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.762 [2024-04-26 15:03:02.236578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.762 [2024-04-26 15:03:02.236797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.762 [2024-04-26 15:03:02.236806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.762 [2024-04-26 15:03:02.236813] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.762 [2024-04-26 15:03:02.240290] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.762 [2024-04-26 15:03:02.244933] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:19.762 [2024-04-26 15:03:02.244979] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.762 [2024-04-26 15:03:02.249093] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.762 [2024-04-26 15:03:02.249606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.249965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.249976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.762 [2024-04-26 15:03:02.249984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.762 [2024-04-26 15:03:02.250199] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.762 [2024-04-26 15:03:02.250413] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.762 [2024-04-26 15:03:02.250422] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.762 [2024-04-26 15:03:02.250429] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.762 [2024-04-26 15:03:02.253903] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.762 [2024-04-26 15:03:02.262926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.762 [2024-04-26 15:03:02.263570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.263822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.263834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.762 [2024-04-26 15:03:02.263852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.762 [2024-04-26 15:03:02.264086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.762 [2024-04-26 15:03:02.264304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.762 [2024-04-26 15:03:02.264313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.762 [2024-04-26 15:03:02.264321] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.762 [2024-04-26 15:03:02.267792] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.762 [2024-04-26 15:03:02.276803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.762 [2024-04-26 15:03:02.277414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.762 [2024-04-26 15:03:02.277747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.277760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.762 [2024-04-26 15:03:02.277770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.762 [2024-04-26 15:03:02.278011] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.762 [2024-04-26 15:03:02.278230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.762 [2024-04-26 15:03:02.278238] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.762 [2024-04-26 15:03:02.278245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.762 [2024-04-26 15:03:02.281722] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.762 [2024-04-26 15:03:02.290529] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.762 [2024-04-26 15:03:02.291222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.291551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.291564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.762 [2024-04-26 15:03:02.291573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.762 [2024-04-26 15:03:02.291807] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.762 [2024-04-26 15:03:02.292033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.762 [2024-04-26 15:03:02.292043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.762 [2024-04-26 15:03:02.292051] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.762 [2024-04-26 15:03:02.295526] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.762 [2024-04-26 15:03:02.304326] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.762 [2024-04-26 15:03:02.304949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.305279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.305292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.762 [2024-04-26 15:03:02.305302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.762 [2024-04-26 15:03:02.305536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.762 [2024-04-26 15:03:02.305754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.762 [2024-04-26 15:03:02.305762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.762 [2024-04-26 15:03:02.305770] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.762 [2024-04-26 15:03:02.309253] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.762 [2024-04-26 15:03:02.318065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.762 [2024-04-26 15:03:02.318692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.319109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.319124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.762 [2024-04-26 15:03:02.319134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.762 [2024-04-26 15:03:02.319368] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.762 [2024-04-26 15:03:02.319586] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.762 [2024-04-26 15:03:02.319594] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.762 [2024-04-26 15:03:02.319602] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.762 [2024-04-26 15:03:02.323079] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.762 [2024-04-26 15:03:02.329696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:19.762 [2024-04-26 15:03:02.331886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.762 [2024-04-26 15:03:02.332512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.332848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.332862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.762 [2024-04-26 15:03:02.332872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.762 [2024-04-26 15:03:02.333106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.762 [2024-04-26 15:03:02.333324] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.762 [2024-04-26 15:03:02.333332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.762 [2024-04-26 15:03:02.333340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.762 [2024-04-26 15:03:02.336841] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.762 [2024-04-26 15:03:02.345734] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.762 [2024-04-26 15:03:02.346341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.346673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.762 [2024-04-26 15:03:02.346686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.762 [2024-04-26 15:03:02.346695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.762 [2024-04-26 15:03:02.346938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.762 [2024-04-26 15:03:02.347156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.763 [2024-04-26 15:03:02.347165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.763 [2024-04-26 15:03:02.347172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.763 [2024-04-26 15:03:02.350642] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.763 [2024-04-26 15:03:02.359655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.763 [2024-04-26 15:03:02.360226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.360531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.360541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.763 [2024-04-26 15:03:02.360549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.763 [2024-04-26 15:03:02.360765] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.763 [2024-04-26 15:03:02.360985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.763 [2024-04-26 15:03:02.361001] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.763 [2024-04-26 15:03:02.361009] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.763 [2024-04-26 15:03:02.364485] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
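The companion message on every retry, "Failed to flush tqpair=... (9): Bad file descriptor", is errno 9 (EBADF): by the time the qpair is flushed, its socket has already been torn down. A small sketch, unrelated to SPDK's tqpair handling, showing that writing to an already-closed descriptor produces errno 9:

/* Illustrative only: a write() on an already-closed descriptor returns
 * errno 9 (EBADF) on Linux, matching the "(9): Bad file descriptor"
 * flush errors in the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                      /* tear down the write end first */

    if (write(fds[1], "x", 1) < 0) {
        /* Prints: write failed: errno=9 (Bad file descriptor) */
        printf("write failed: errno=%d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}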
00:26:19.763 [2024-04-26 15:03:02.373499] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.763 [2024-04-26 15:03:02.373974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.374320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.374333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.763 [2024-04-26 15:03:02.374343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.763 [2024-04-26 15:03:02.374578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.763 [2024-04-26 15:03:02.374795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.763 [2024-04-26 15:03:02.374804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.763 [2024-04-26 15:03:02.374812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.763 [2024-04-26 15:03:02.378296] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.763 [2024-04-26 15:03:02.381822] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.763 [2024-04-26 15:03:02.381851] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.763 [2024-04-26 15:03:02.381859] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.763 [2024-04-26 15:03:02.381864] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.763 [2024-04-26 15:03:02.381868] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:19.763 [2024-04-26 15:03:02.382074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.763 [2024-04-26 15:03:02.382285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.763 [2024-04-26 15:03:02.382287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.763 [2024-04-26 15:03:02.387302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.763 [2024-04-26 15:03:02.387941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.388286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.388298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.763 [2024-04-26 15:03:02.388308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.763 [2024-04-26 15:03:02.388543] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.763 [2024-04-26 15:03:02.388761] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.763 [2024-04-26 15:03:02.388769] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.763 [2024-04-26 15:03:02.388777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.763 [2024-04-26 15:03:02.392256] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.763 [2024-04-26 15:03:02.401060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.763 [2024-04-26 15:03:02.401668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.402032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.402047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.763 [2024-04-26 15:03:02.402057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.763 [2024-04-26 15:03:02.402293] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.763 [2024-04-26 15:03:02.402511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.763 [2024-04-26 15:03:02.402519] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.763 [2024-04-26 15:03:02.402527] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.763 [2024-04-26 15:03:02.406004] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.763 [2024-04-26 15:03:02.414803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.763 [2024-04-26 15:03:02.415364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.415705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.763 [2024-04-26 15:03:02.415718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:19.763 [2024-04-26 15:03:02.415729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:19.763 [2024-04-26 15:03:02.415971] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:19.763 [2024-04-26 15:03:02.416194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.763 [2024-04-26 15:03:02.416203] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.763 [2024-04-26 15:03:02.416211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.763 [2024-04-26 15:03:02.419685] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.027 [2024-04-26 15:03:02.428689] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.027 [2024-04-26 15:03:02.429269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.429608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.429621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.027 [2024-04-26 15:03:02.429631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.027 [2024-04-26 15:03:02.429872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.027 [2024-04-26 15:03:02.430090] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.027 [2024-04-26 15:03:02.430099] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.027 [2024-04-26 15:03:02.430107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.027 [2024-04-26 15:03:02.433581] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.027 [2024-04-26 15:03:02.442600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.027 [2024-04-26 15:03:02.443231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.443566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.443579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.027 [2024-04-26 15:03:02.443589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.027 [2024-04-26 15:03:02.443823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.027 [2024-04-26 15:03:02.444048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.027 [2024-04-26 15:03:02.444057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.027 [2024-04-26 15:03:02.444064] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.027 [2024-04-26 15:03:02.447536] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.027 [2024-04-26 15:03:02.456335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.027 [2024-04-26 15:03:02.456892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.457261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.457271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.027 [2024-04-26 15:03:02.457279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.027 [2024-04-26 15:03:02.457499] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.027 [2024-04-26 15:03:02.457714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.027 [2024-04-26 15:03:02.457722] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.027 [2024-04-26 15:03:02.457734] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.027 [2024-04-26 15:03:02.461213] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.027 [2024-04-26 15:03:02.470220] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.027 [2024-04-26 15:03:02.470766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.471098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.471109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.027 [2024-04-26 15:03:02.471117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.027 [2024-04-26 15:03:02.471333] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.027 [2024-04-26 15:03:02.471547] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.027 [2024-04-26 15:03:02.471555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.027 [2024-04-26 15:03:02.471562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.027 [2024-04-26 15:03:02.475030] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.027 [2024-04-26 15:03:02.484032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.027 [2024-04-26 15:03:02.484467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.484756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.484766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.027 [2024-04-26 15:03:02.484773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.027 [2024-04-26 15:03:02.484993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.027 [2024-04-26 15:03:02.485208] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.027 [2024-04-26 15:03:02.485216] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.027 [2024-04-26 15:03:02.485223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.027 [2024-04-26 15:03:02.488690] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.027 [2024-04-26 15:03:02.497894] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.027 [2024-04-26 15:03:02.498426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.027 [2024-04-26 15:03:02.498768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.498778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.028 [2024-04-26 15:03:02.498786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.028 [2024-04-26 15:03:02.499005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.028 [2024-04-26 15:03:02.499220] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.028 [2024-04-26 15:03:02.499228] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.028 [2024-04-26 15:03:02.499239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.028 [2024-04-26 15:03:02.502707] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.028 [2024-04-26 15:03:02.511707] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.028 [2024-04-26 15:03:02.512219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.512581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.512593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.028 [2024-04-26 15:03:02.512603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.028 [2024-04-26 15:03:02.512844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.028 [2024-04-26 15:03:02.513068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.028 [2024-04-26 15:03:02.513077] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.028 [2024-04-26 15:03:02.513085] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.028 [2024-04-26 15:03:02.516557] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.028 [2024-04-26 15:03:02.525559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.028 [2024-04-26 15:03:02.526086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.526324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.526335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.028 [2024-04-26 15:03:02.526343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.028 [2024-04-26 15:03:02.526559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.028 [2024-04-26 15:03:02.526773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.028 [2024-04-26 15:03:02.526781] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.028 [2024-04-26 15:03:02.526788] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.028 [2024-04-26 15:03:02.530262] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.028 [2024-04-26 15:03:02.539283] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.028 [2024-04-26 15:03:02.539939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.540181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.540193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.028 [2024-04-26 15:03:02.540203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.028 [2024-04-26 15:03:02.540437] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.028 [2024-04-26 15:03:02.540655] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.028 [2024-04-26 15:03:02.540663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.028 [2024-04-26 15:03:02.540670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.028 [2024-04-26 15:03:02.544155] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.028 [2024-04-26 15:03:02.553167] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.028 [2024-04-26 15:03:02.553800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.554183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.554197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.028 [2024-04-26 15:03:02.554206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.028 [2024-04-26 15:03:02.554441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.028 [2024-04-26 15:03:02.554658] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.028 [2024-04-26 15:03:02.554666] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.028 [2024-04-26 15:03:02.554674] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.028 [2024-04-26 15:03:02.558154] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.028 [2024-04-26 15:03:02.566959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.028 [2024-04-26 15:03:02.567449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.567811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.567824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.028 [2024-04-26 15:03:02.567833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.028 [2024-04-26 15:03:02.568075] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.028 [2024-04-26 15:03:02.568293] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.028 [2024-04-26 15:03:02.568301] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.028 [2024-04-26 15:03:02.568309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.028 [2024-04-26 15:03:02.571784] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.028 [2024-04-26 15:03:02.580789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.028 [2024-04-26 15:03:02.581284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.581648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.581661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.028 [2024-04-26 15:03:02.581671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.028 [2024-04-26 15:03:02.581911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.028 [2024-04-26 15:03:02.582129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.028 [2024-04-26 15:03:02.582138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.028 [2024-04-26 15:03:02.582146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.028 [2024-04-26 15:03:02.585617] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.028 [2024-04-26 15:03:02.594621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.028 [2024-04-26 15:03:02.595196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.595544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.595557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.028 [2024-04-26 15:03:02.595566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.028 [2024-04-26 15:03:02.595800] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.028 [2024-04-26 15:03:02.596025] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.028 [2024-04-26 15:03:02.596035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.028 [2024-04-26 15:03:02.596043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.028 [2024-04-26 15:03:02.599517] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.028 [2024-04-26 15:03:02.608363] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.028 [2024-04-26 15:03:02.608941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.609303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.028 [2024-04-26 15:03:02.609316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.028 [2024-04-26 15:03:02.609326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.028 [2024-04-26 15:03:02.609560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.029 [2024-04-26 15:03:02.609777] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.029 [2024-04-26 15:03:02.609786] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.029 [2024-04-26 15:03:02.609793] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.029 [2024-04-26 15:03:02.613270] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.029 [2024-04-26 15:03:02.622275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.029 [2024-04-26 15:03:02.622808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.623227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.623240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.029 [2024-04-26 15:03:02.623250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.029 [2024-04-26 15:03:02.623484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.029 [2024-04-26 15:03:02.623701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.029 [2024-04-26 15:03:02.623709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.029 [2024-04-26 15:03:02.623716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.029 [2024-04-26 15:03:02.627194] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.029 [2024-04-26 15:03:02.636001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.029 [2024-04-26 15:03:02.636486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.636861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.636876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.029 [2024-04-26 15:03:02.636885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.029 [2024-04-26 15:03:02.637119] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.029 [2024-04-26 15:03:02.637337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.029 [2024-04-26 15:03:02.637345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.029 [2024-04-26 15:03:02.637352] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.029 [2024-04-26 15:03:02.640825] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.029 [2024-04-26 15:03:02.649834] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.029 [2024-04-26 15:03:02.650472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.650807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.650819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.029 [2024-04-26 15:03:02.650829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.029 [2024-04-26 15:03:02.651070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.029 [2024-04-26 15:03:02.651288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.029 [2024-04-26 15:03:02.651303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.029 [2024-04-26 15:03:02.651311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.029 [2024-04-26 15:03:02.654784] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.029 [2024-04-26 15:03:02.663591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.029 [2024-04-26 15:03:02.664121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.664462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.664474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.029 [2024-04-26 15:03:02.664484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.029 [2024-04-26 15:03:02.664718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.029 [2024-04-26 15:03:02.664942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.029 [2024-04-26 15:03:02.664951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.029 [2024-04-26 15:03:02.664958] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.029 [2024-04-26 15:03:02.668430] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.029 [2024-04-26 15:03:02.677431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.029 [2024-04-26 15:03:02.677961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.678090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.029 [2024-04-26 15:03:02.678103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.029 [2024-04-26 15:03:02.678116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.029 [2024-04-26 15:03:02.678350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.029 [2024-04-26 15:03:02.678568] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.029 [2024-04-26 15:03:02.678576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.029 [2024-04-26 15:03:02.678583] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.029 [2024-04-26 15:03:02.682062] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.293 [2024-04-26 15:03:02.691273] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.293 [2024-04-26 15:03:02.691925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.692331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.692344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.293 [2024-04-26 15:03:02.692354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.293 [2024-04-26 15:03:02.692587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.293 [2024-04-26 15:03:02.692804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.293 [2024-04-26 15:03:02.692813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.293 [2024-04-26 15:03:02.692820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.293 [2024-04-26 15:03:02.696297] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.293 [2024-04-26 15:03:02.705102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.293 [2024-04-26 15:03:02.705542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.705848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.705859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.293 [2024-04-26 15:03:02.705866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.293 [2024-04-26 15:03:02.706082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.293 [2024-04-26 15:03:02.706296] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.293 [2024-04-26 15:03:02.706304] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.293 [2024-04-26 15:03:02.706311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.293 [2024-04-26 15:03:02.709780] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.293 [2024-04-26 15:03:02.718988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.293 [2024-04-26 15:03:02.719626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.719971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.719986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.293 [2024-04-26 15:03:02.719995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.293 [2024-04-26 15:03:02.720233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.293 [2024-04-26 15:03:02.720450] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.293 [2024-04-26 15:03:02.720460] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.293 [2024-04-26 15:03:02.720467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.293 [2024-04-26 15:03:02.723946] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.293 [2024-04-26 15:03:02.732776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.293 [2024-04-26 15:03:02.733424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.733640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.733653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.293 [2024-04-26 15:03:02.733662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.293 [2024-04-26 15:03:02.733911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.293 [2024-04-26 15:03:02.734129] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.293 [2024-04-26 15:03:02.734138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.293 [2024-04-26 15:03:02.734145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.293 [2024-04-26 15:03:02.737616] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.293 [2024-04-26 15:03:02.746624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.293 [2024-04-26 15:03:02.747230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.747596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.293 [2024-04-26 15:03:02.747608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.293 [2024-04-26 15:03:02.747616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.293 [2024-04-26 15:03:02.747831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.293 [2024-04-26 15:03:02.748051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.293 [2024-04-26 15:03:02.748060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.293 [2024-04-26 15:03:02.748067] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.294 [2024-04-26 15:03:02.751538] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.294 [2024-04-26 15:03:02.760336] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.294 [2024-04-26 15:03:02.760710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.760765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.760774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.294 [2024-04-26 15:03:02.760781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.294 [2024-04-26 15:03:02.761006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.294 [2024-04-26 15:03:02.761220] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.294 [2024-04-26 15:03:02.761228] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.294 [2024-04-26 15:03:02.761235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.294 [2024-04-26 15:03:02.764702] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.294 [2024-04-26 15:03:02.774112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.294 [2024-04-26 15:03:02.774496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.774825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.774836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.294 [2024-04-26 15:03:02.774849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.294 [2024-04-26 15:03:02.775064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.294 [2024-04-26 15:03:02.775278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.294 [2024-04-26 15:03:02.775285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.294 [2024-04-26 15:03:02.775292] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.294 [2024-04-26 15:03:02.778759] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.294 [2024-04-26 15:03:02.787959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.294 [2024-04-26 15:03:02.788496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.788789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.788799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.294 [2024-04-26 15:03:02.788807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.294 [2024-04-26 15:03:02.789026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.294 [2024-04-26 15:03:02.789241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.294 [2024-04-26 15:03:02.789248] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.294 [2024-04-26 15:03:02.789255] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.294 [2024-04-26 15:03:02.792785] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.294 [2024-04-26 15:03:02.801788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.294 [2024-04-26 15:03:02.802327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.802513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.802523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.294 [2024-04-26 15:03:02.802531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.294 [2024-04-26 15:03:02.802745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.294 [2024-04-26 15:03:02.802968] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.294 [2024-04-26 15:03:02.802976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.294 [2024-04-26 15:03:02.802983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.294 [2024-04-26 15:03:02.806449] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.294 [2024-04-26 15:03:02.815648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.294 [2024-04-26 15:03:02.816319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.816559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.816572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.294 [2024-04-26 15:03:02.816582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.294 [2024-04-26 15:03:02.816815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.294 [2024-04-26 15:03:02.817040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.294 [2024-04-26 15:03:02.817048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.294 [2024-04-26 15:03:02.817056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.294 [2024-04-26 15:03:02.820528] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.294 [2024-04-26 15:03:02.829530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.294 [2024-04-26 15:03:02.830167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.830460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.830475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.294 [2024-04-26 15:03:02.830485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.294 [2024-04-26 15:03:02.830719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.294 [2024-04-26 15:03:02.830943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.294 [2024-04-26 15:03:02.830952] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.294 [2024-04-26 15:03:02.830959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.294 [2024-04-26 15:03:02.834442] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.294 [2024-04-26 15:03:02.843245] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.294 [2024-04-26 15:03:02.843858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.844231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.844243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.294 [2024-04-26 15:03:02.844253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.294 [2024-04-26 15:03:02.844486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.294 [2024-04-26 15:03:02.844704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.294 [2024-04-26 15:03:02.844717] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.294 [2024-04-26 15:03:02.844724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.294 [2024-04-26 15:03:02.848203] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.294 [2024-04-26 15:03:02.857096] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.294 [2024-04-26 15:03:02.857743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.858053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.294 [2024-04-26 15:03:02.858069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.294 [2024-04-26 15:03:02.858079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.294 [2024-04-26 15:03:02.858313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.294 [2024-04-26 15:03:02.858536] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.294 [2024-04-26 15:03:02.858546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.295 [2024-04-26 15:03:02.858553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.295 [2024-04-26 15:03:02.862031] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.295 [2024-04-26 15:03:02.870831] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.295 [2024-04-26 15:03:02.871449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.871659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.871672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.295 [2024-04-26 15:03:02.871682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.295 [2024-04-26 15:03:02.871923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.295 [2024-04-26 15:03:02.872143] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.295 [2024-04-26 15:03:02.872152] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.295 [2024-04-26 15:03:02.872159] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.295 [2024-04-26 15:03:02.875636] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.295 [2024-04-26 15:03:02.884870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.295 [2024-04-26 15:03:02.885521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.885769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.885782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.295 [2024-04-26 15:03:02.885791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.295 [2024-04-26 15:03:02.886033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.295 [2024-04-26 15:03:02.886252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.295 [2024-04-26 15:03:02.886260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.295 [2024-04-26 15:03:02.886271] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.295 [2024-04-26 15:03:02.889746] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.295 [2024-04-26 15:03:02.898760] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.295 [2024-04-26 15:03:02.899370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.899712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.899725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.295 [2024-04-26 15:03:02.899734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.295 [2024-04-26 15:03:02.899975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.295 [2024-04-26 15:03:02.900193] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.295 [2024-04-26 15:03:02.900201] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.295 [2024-04-26 15:03:02.900208] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.295 [2024-04-26 15:03:02.903681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.295 [2024-04-26 15:03:02.912482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.295 [2024-04-26 15:03:02.913008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.913334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.913344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.295 [2024-04-26 15:03:02.913352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.295 [2024-04-26 15:03:02.913567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.295 [2024-04-26 15:03:02.913782] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.295 [2024-04-26 15:03:02.913790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.295 [2024-04-26 15:03:02.913797] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.295 [2024-04-26 15:03:02.917272] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.295 [2024-04-26 15:03:02.926271] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.295 [2024-04-26 15:03:02.926900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.927241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.927254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.295 [2024-04-26 15:03:02.927264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.295 [2024-04-26 15:03:02.927498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.295 [2024-04-26 15:03:02.927715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.295 [2024-04-26 15:03:02.927724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.295 [2024-04-26 15:03:02.927731] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.295 [2024-04-26 15:03:02.931210] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.295 [2024-04-26 15:03:02.940027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.295 [2024-04-26 15:03:02.940531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.940870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.940881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.295 [2024-04-26 15:03:02.940889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.295 [2024-04-26 15:03:02.941106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.295 [2024-04-26 15:03:02.941321] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.295 [2024-04-26 15:03:02.941335] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.295 [2024-04-26 15:03:02.941343] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.295 [2024-04-26 15:03:02.944814] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.295 [2024-04-26 15:03:02.953818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.295 [2024-04-26 15:03:02.954456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.954677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.295 [2024-04-26 15:03:02.954692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.295 [2024-04-26 15:03:02.954702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.295 [2024-04-26 15:03:02.954942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.295 [2024-04-26 15:03:02.955161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.295 [2024-04-26 15:03:02.955170] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.295 [2024-04-26 15:03:02.955177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.557 [2024-04-26 15:03:02.958654] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.557 [2024-04-26 15:03:02.967669] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.557 [2024-04-26 15:03:02.968297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.557 [2024-04-26 15:03:02.968648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.557 [2024-04-26 15:03:02.968661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.558 [2024-04-26 15:03:02.968671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.558 [2024-04-26 15:03:02.968913] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.558 [2024-04-26 15:03:02.969131] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.558 [2024-04-26 15:03:02.969140] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.558 [2024-04-26 15:03:02.969148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.558 [2024-04-26 15:03:02.972622] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.558 [2024-04-26 15:03:02.981427] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.558 [2024-04-26 15:03:02.982089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:02.982434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:02.982447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.558 [2024-04-26 15:03:02.982457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.558 [2024-04-26 15:03:02.982691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.558 [2024-04-26 15:03:02.982915] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.558 [2024-04-26 15:03:02.982925] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.558 [2024-04-26 15:03:02.982932] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.558 [2024-04-26 15:03:02.986406] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.558 [2024-04-26 15:03:02.995211] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.558 [2024-04-26 15:03:02.995765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:02.996018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:02.996030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.558 [2024-04-26 15:03:02.996038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.558 [2024-04-26 15:03:02.996253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.558 [2024-04-26 15:03:02.996468] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.558 [2024-04-26 15:03:02.996475] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.558 [2024-04-26 15:03:02.996482] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.558 [2024-04-26 15:03:02.999957] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.558 [2024-04-26 15:03:03.008956] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.558 [2024-04-26 15:03:03.009573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.009915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.009929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.558 [2024-04-26 15:03:03.009939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.558 [2024-04-26 15:03:03.010172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.558 [2024-04-26 15:03:03.010390] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.558 [2024-04-26 15:03:03.010398] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.558 [2024-04-26 15:03:03.010405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.558 [2024-04-26 15:03:03.013882] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
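The errno = 111 repeated through these reconnect attempts is ECONNREFUSED on Linux: each connect() issued by posix_sock_create() toward 10.0.0.2:4420 is being actively refused (typically because nothing is accepting connections on that address/port at that moment), so every reset cycle ends in "Resetting controller failed." until the target side becomes reachable again. A quick way to confirm the errno mapping, assuming a Python 3 interpreter is available on the test node:

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused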
00:26:20.558 15:03:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:20.558 15:03:03 -- common/autotest_common.sh@850 -- # return 0 00:26:20.558 15:03:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:20.558 15:03:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:20.558 15:03:03 -- common/autotest_common.sh@10 -- # set +x 00:26:20.558 [2024-04-26 15:03:03.022681] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.558 [2024-04-26 15:03:03.023214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.023458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.023470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.558 [2024-04-26 15:03:03.023481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.558 [2024-04-26 15:03:03.023716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.558 [2024-04-26 15:03:03.023941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.558 [2024-04-26 15:03:03.023950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.558 [2024-04-26 15:03:03.023957] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.558 [2024-04-26 15:03:03.027432] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.558 [2024-04-26 15:03:03.036460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.558 [2024-04-26 15:03:03.037084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.037490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.037503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.558 [2024-04-26 15:03:03.037513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.558 [2024-04-26 15:03:03.037747] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.558 [2024-04-26 15:03:03.037973] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.558 [2024-04-26 15:03:03.037982] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.558 [2024-04-26 15:03:03.037990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.558 [2024-04-26 15:03:03.041465] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.558 [2024-04-26 15:03:03.050272] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.558 [2024-04-26 15:03:03.050789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.051162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.051176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.558 [2024-04-26 15:03:03.051186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.558 [2024-04-26 15:03:03.051420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.558 [2024-04-26 15:03:03.051638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.558 [2024-04-26 15:03:03.051647] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.558 [2024-04-26 15:03:03.051654] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.558 15:03:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.558 [2024-04-26 15:03:03.055133] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.558 15:03:03 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.558 15:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.558 15:03:03 -- common/autotest_common.sh@10 -- # set +x 00:26:20.558 [2024-04-26 15:03:03.060637] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.558 [2024-04-26 15:03:03.064159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.558 [2024-04-26 15:03:03.064670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.064860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.558 [2024-04-26 15:03:03.064871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.558 [2024-04-26 15:03:03.064879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.559 [2024-04-26 15:03:03.065093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.559 [2024-04-26 15:03:03.065308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.559 [2024-04-26 15:03:03.065316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.559 [2024-04-26 15:03:03.065323] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:20.559 15:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.559 15:03:03 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:20.559 15:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.559 15:03:03 -- common/autotest_common.sh@10 -- # set +x 00:26:20.559 [2024-04-26 15:03:03.068794] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.559 [2024-04-26 15:03:03.078008] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.559 [2024-04-26 15:03:03.078447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.559 [2024-04-26 15:03:03.078699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.559 [2024-04-26 15:03:03.078712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.559 [2024-04-26 15:03:03.078722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.559 [2024-04-26 15:03:03.078963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.559 [2024-04-26 15:03:03.079182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.559 [2024-04-26 15:03:03.079190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.559 [2024-04-26 15:03:03.079197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.559 [2024-04-26 15:03:03.082669] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.559 [2024-04-26 15:03:03.091885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.559 [2024-04-26 15:03:03.092569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.559 Malloc0 00:26:20.559 [2024-04-26 15:03:03.092979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.559 [2024-04-26 15:03:03.092994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.559 [2024-04-26 15:03:03.093003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.559 [2024-04-26 15:03:03.093238] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.559 15:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.559 [2024-04-26 15:03:03.093456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.559 [2024-04-26 15:03:03.093465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.559 [2024-04-26 15:03:03.093473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:20.559 15:03:03 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.559 15:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.559 15:03:03 -- common/autotest_common.sh@10 -- # set +x 00:26:20.559 [2024-04-26 15:03:03.096950] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.559 15:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.559 15:03:03 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.559 [2024-04-26 15:03:03.105752] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.559 15:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.559 15:03:03 -- common/autotest_common.sh@10 -- # set +x 00:26:20.559 [2024-04-26 15:03:03.106377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.559 [2024-04-26 15:03:03.106744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.559 [2024-04-26 15:03:03.106757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.559 [2024-04-26 15:03:03.106766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.559 [2024-04-26 15:03:03.107008] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.559 [2024-04-26 15:03:03.107226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.559 [2024-04-26 15:03:03.107235] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.559 [2024-04-26 15:03:03.107242] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.559 [2024-04-26 15:03:03.110711] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.559 15:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.559 15:03:03 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.559 15:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.559 15:03:03 -- common/autotest_common.sh@10 -- # set +x 00:26:20.559 [2024-04-26 15:03:03.119521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.559 [2024-04-26 15:03:03.120170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.559 [2024-04-26 15:03:03.120509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.559 [2024-04-26 15:03:03.120522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e8620 with addr=10.0.0.2, port=4420 00:26:20.559 [2024-04-26 15:03:03.120532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8620 is same with the state(5) to be set 00:26:20.559 [2024-04-26 15:03:03.120766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8620 (9): Bad file descriptor 00:26:20.559 [2024-04-26 15:03:03.120991] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.559 [2024-04-26 15:03:03.121001] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.559 [2024-04-26 15:03:03.121008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.559 [2024-04-26 15:03:03.124393] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.559 [2024-04-26 15:03:03.124482] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.559 15:03:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.559 15:03:03 -- host/bdevperf.sh@38 -- # wait 1223042 00:26:20.559 [2024-04-26 15:03:03.133292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.559 [2024-04-26 15:03:03.174856] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
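For reference, the rpc_cmd calls traced above are standard SPDK JSON-RPC methods, so the same bdevperf target setup can be reproduced by hand against a running nvmf_tgt with scripts/rpc.py. A sketch only, assuming the default /var/tmp/spdk.sock RPC socket; the flags simply mirror the trace above:

# recreate the Malloc-backed NVMe/TCP subsystem exercised by this test
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As a sanity check on the bdevperf summary that follows: at the 4096-byte I/O size used by this job, 8068.10 IOPS x 4096 bytes is about 31.52 MiB/s, which matches the MiB/s column in the table below.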
00:26:30.561
00:26:30.561 Latency(us)
00:26:30.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:30.561 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:30.561 Verification LBA range: start 0x0 length 0x4000
00:26:30.561 Nvme1n1 : 15.01 8068.10 31.52 9732.31 0.00 7165.95 795.31 16493.23
00:26:30.561 ===================================================================================================================
00:26:30.561 Total : 8068.10 31.52 9732.31 0.00 7165.95 795.31 16493.23
00:26:30.561 15:03:11 -- host/bdevperf.sh@39 -- # sync
00:26:30.561 15:03:11 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:30.561 15:03:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:30.561 15:03:11 -- common/autotest_common.sh@10 -- # set +x
00:26:30.561 15:03:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:30.561 15:03:11 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:30.561 15:03:11 -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:30.561 15:03:11 -- nvmf/common.sh@477 -- # nvmfcleanup
00:26:30.561 15:03:11 -- nvmf/common.sh@117 -- # sync
00:26:30.561 15:03:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:30.561 15:03:11 -- nvmf/common.sh@120 -- # set +e
00:26:30.561 15:03:11 -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:30.561 15:03:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:30.561 rmmod nvme_tcp
00:26:30.561 rmmod nvme_fabrics
00:26:30.561 rmmod nvme_keyring
00:26:30.561 15:03:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:30.561 15:03:11 -- nvmf/common.sh@124 -- # set -e
00:26:30.561 15:03:11 -- nvmf/common.sh@125 -- # return 0
00:26:30.561 15:03:11 -- nvmf/common.sh@478 -- # '[' -n 1224466 ']'
00:26:30.561 15:03:11 -- nvmf/common.sh@479 -- # killprocess 1224466
00:26:30.561 15:03:11 -- common/autotest_common.sh@936 -- # '[' -z 1224466 ']'
00:26:30.561 15:03:11 -- common/autotest_common.sh@940 -- # kill -0 1224466
00:26:30.561 15:03:11 -- common/autotest_common.sh@941 -- # uname
00:26:30.561 15:03:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:30.561 15:03:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1224466
00:26:30.561 15:03:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:26:30.561 15:03:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:30.561 15:03:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1224466'
00:26:30.561 killing process with pid 1224466
00:26:30.561 15:03:11 -- common/autotest_common.sh@955 -- # kill 1224466
00:26:30.561 15:03:11 -- common/autotest_common.sh@960 -- # wait 1224466
00:26:30.561 15:03:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:26:30.561 15:03:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:26:30.561 15:03:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:26:30.561 15:03:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:30.561 15:03:11 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:30.561 15:03:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:30.561 15:03:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:30.561 15:03:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:31.503 15:03:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:31.503
00:26:31.503 real 0m27.710s
00:26:31.503 user 1m2.986s
00:26:31.503 sys 0m6.903s
00:26:31.503 15:03:13 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:26:31.503 15:03:13 -- common/autotest_common.sh@10 -- # set +x 00:26:31.503 ************************************ 00:26:31.503 END TEST nvmf_bdevperf 00:26:31.503 ************************************ 00:26:31.503 15:03:14 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:31.503 15:03:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:31.503 15:03:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:31.503 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:26:31.764 ************************************ 00:26:31.764 START TEST nvmf_target_disconnect 00:26:31.764 ************************************ 00:26:31.764 15:03:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:31.764 * Looking for test storage... 00:26:31.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.764 15:03:14 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.764 15:03:14 -- nvmf/common.sh@7 -- # uname -s 00:26:31.764 15:03:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.764 15:03:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.764 15:03:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.764 15:03:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.764 15:03:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.764 15:03:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.764 15:03:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.764 15:03:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.764 15:03:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.764 15:03:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.764 15:03:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:31.764 15:03:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:31.764 15:03:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.764 15:03:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.764 15:03:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.764 15:03:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.764 15:03:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.764 15:03:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.764 15:03:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.764 15:03:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.764 15:03:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.764 15:03:14 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.764 15:03:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.764 15:03:14 -- paths/export.sh@5 -- # export PATH 00:26:31.764 15:03:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.764 15:03:14 -- nvmf/common.sh@47 -- # : 0 00:26:31.764 15:03:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:31.764 15:03:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:31.764 15:03:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.764 15:03:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.764 15:03:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.764 15:03:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:31.764 15:03:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:31.764 15:03:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:31.764 15:03:14 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:31.764 15:03:14 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:31.764 15:03:14 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:31.764 15:03:14 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:31.764 15:03:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:31.764 15:03:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.764 15:03:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:31.764 15:03:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:31.764 15:03:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:31.764 15:03:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.764 15:03:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.764 15:03:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.764 15:03:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:31.764 15:03:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:31.764 15:03:14 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:26:31.764 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:26:39.907 15:03:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:39.907 15:03:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.907 15:03:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.907 15:03:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.907 15:03:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.907 15:03:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.907 15:03:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.907 15:03:21 -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.907 15:03:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.907 15:03:21 -- nvmf/common.sh@296 -- # e810=() 00:26:39.907 15:03:21 -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.907 15:03:21 -- nvmf/common.sh@297 -- # x722=() 00:26:39.907 15:03:21 -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.907 15:03:21 -- nvmf/common.sh@298 -- # mlx=() 00:26:39.907 15:03:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.907 15:03:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.907 15:03:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.907 15:03:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.907 15:03:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.907 15:03:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.907 15:03:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:39.907 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:39.907 15:03:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.907 15:03:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:39.907 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:39.907 15:03:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.907 15:03:21 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.907 15:03:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.907 15:03:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.907 15:03:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:39.907 15:03:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.907 15:03:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:39.907 Found net devices under 0000:31:00.0: cvl_0_0 00:26:39.907 15:03:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.907 15:03:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.907 15:03:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.907 15:03:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:39.907 15:03:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.907 15:03:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:39.907 Found net devices under 0000:31:00.1: cvl_0_1 00:26:39.907 15:03:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.907 15:03:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:39.907 15:03:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:39.907 15:03:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:39.907 15:03:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:39.907 15:03:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.908 15:03:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.908 15:03:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.908 15:03:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:39.908 15:03:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.908 15:03:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.908 15:03:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:39.908 15:03:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.908 15:03:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.908 15:03:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:39.908 15:03:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:39.908 15:03:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.908 15:03:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.908 15:03:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.908 15:03:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.908 15:03:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:39.908 15:03:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.908 15:03:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.908 15:03:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.908 15:03:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:39.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:39.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.732 ms
00:26:39.908
00:26:39.908 --- 10.0.0.2 ping statistics ---
00:26:39.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:39.908 rtt min/avg/max/mdev = 0.732/0.732/0.732/0.000 ms
00:26:39.908 15:03:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:39.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:39.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms
00:26:39.908
00:26:39.908 --- 10.0.0.1 ping statistics ---
00:26:39.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:39.908 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms
00:26:39.908 15:03:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:39.908 15:03:21 -- nvmf/common.sh@411 -- # return 0
00:26:39.908 15:03:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:26:39.908 15:03:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:39.908 15:03:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:26:39.908 15:03:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:26:39.908 15:03:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:39.908 15:03:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:26:39.908 15:03:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:26:39.908 15:03:21 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:26:39.908 15:03:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:39.908 15:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:39.908 15:03:21 -- common/autotest_common.sh@10 -- # set +x
00:26:39.908 ************************************
00:26:39.908 START TEST nvmf_target_disconnect_tc1
00:26:39.908 ************************************
00:26:39.908 15:03:21 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1
00:26:39.908 15:03:21 -- host/target_disconnect.sh@32 -- # set +e
00:26:39.908 15:03:21 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:39.908 EAL: No free 2048 kB hugepages reported on node 1
00:26:39.908 [2024-04-26 15:03:21.628721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.908 [2024-04-26 15:03:21.629093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.908 [2024-04-26 15:03:21.629109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x74f5f0 with addr=10.0.0.2, port=4420
00:26:39.908 [2024-04-26 15:03:21.629137] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:39.908 [2024-04-26 15:03:21.629153] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:39.908 [2024-04-26 15:03:21.629161] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:26:39.908 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:26:39.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:26:39.908 Initializing NVMe Controllers
00:26:39.908 15:03:21 -- host/target_disconnect.sh@33 -- # trap - ERR
00:26:39.908 15:03:21 -- host/target_disconnect.sh@33 -- # print_backtrace
00:26:39.908 15:03:21 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]]
00:26:39.908 15:03:21 -- common/autotest_common.sh@1139 -- # return 0
00:26:39.908 15:03:21 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']'
00:26:39.908 15:03:21 -- host/target_disconnect.sh@41 -- # set -e
00:26:39.908
00:26:39.908 real 0m0.075s
00:26:39.908 user 0m0.031s
00:26:39.908 sys 0m0.044s
00:26:39.908 15:03:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:39.908 15:03:21 -- common/autotest_common.sh@10 -- # set +x
00:26:39.908 ************************************
00:26:39.908 END TEST nvmf_target_disconnect_tc1
00:26:39.908 ************************************
00:26:39.908 15:03:21 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:26:39.908 15:03:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:39.908 15:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:39.908 15:03:21 -- common/autotest_common.sh@10 -- # set +x
00:26:39.908 ************************************
00:26:39.908 START TEST nvmf_target_disconnect_tc2
00:26:39.908 ************************************
00:26:39.908 15:03:21 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2
00:26:39.908 15:03:21 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2
00:26:39.908 15:03:21 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:39.908 15:03:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:39.908 15:03:21 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:39.908 15:03:21 -- common/autotest_common.sh@10 -- # set +x
00:26:39.908 15:03:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:39.908 15:03:21 -- nvmf/common.sh@470 -- # nvmfpid=1231046
00:26:39.908 15:03:21 -- nvmf/common.sh@471 -- # waitforlisten 1231046
00:26:39.908 15:03:21 -- common/autotest_common.sh@817 -- # '[' -z 1231046 ']'
00:26:39.908 15:03:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:39.908 15:03:21 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:39.908 15:03:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:39.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:39.908 15:03:21 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:39.908 15:03:21 -- common/autotest_common.sh@10 -- # set +x
00:26:39.908 [2024-04-26 15:03:21.852126] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:26:39.908 [2024-04-26 15:03:21.852181] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:39.908 EAL: No free 2048 kB hugepages reported on node 1
00:26:39.908 [2024-04-26 15:03:21.937164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:39.908 [2024-04-26 15:03:22.019404] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:39.908 [2024-04-26 15:03:22.019466] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:39.908 [2024-04-26 15:03:22.019475] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:39.908 [2024-04-26 15:03:22.019482] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:39.908 [2024-04-26 15:03:22.019489] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:39.908 [2024-04-26 15:03:22.019656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:39.908 [2024-04-26 15:03:22.019816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:39.908 [2024-04-26 15:03:22.019982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:26:39.908 [2024-04-26 15:03:22.020085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:40.169 15:03:22 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:40.169 15:03:22 -- common/autotest_common.sh@850 -- # return 0
00:26:40.169 15:03:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:26:40.169 15:03:22 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:40.169 15:03:22 -- common/autotest_common.sh@10 -- # set +x
00:26:40.169 15:03:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:40.169 15:03:22 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:40.169 15:03:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:40.169 15:03:22 -- common/autotest_common.sh@10 -- # set +x
00:26:40.169 Malloc0
00:26:40.169 15:03:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:40.169 15:03:22 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:40.169 15:03:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:40.169 15:03:22 -- common/autotest_common.sh@10 -- # set +x
00:26:40.169 [2024-04-26 15:03:22.730143] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:40.169 15:03:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:40.169 15:03:22 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:40.169 15:03:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:40.169 15:03:22 -- common/autotest_common.sh@10 -- # set +x
00:26:40.169 15:03:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:40.169 15:03:22 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:40.169 15:03:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:40.169 15:03:22 -- common/autotest_common.sh@10 -- # set +x
00:26:40.169 15:03:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:40.169 15:03:22 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:40.169 15:03:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:40.169 15:03:22 -- common/autotest_common.sh@10 -- # set +x
00:26:40.169 [2024-04-26 15:03:22.758489] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:40.169 15:03:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:40.169 15:03:22 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:40.169 15:03:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:40.169 15:03:22 -- common/autotest_common.sh@10 -- # set +x
00:26:40.169 15:03:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:40.169 15:03:22 -- host/target_disconnect.sh@50 -- # reconnectpid=1231138
00:26:40.169 15:03:22 -- host/target_disconnect.sh@52 -- # sleep 2
00:26:40.169 15:03:22 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:40.169 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.736 15:03:24 -- host/target_disconnect.sh@53 -- # kill -9 1231046 00:26:42.736 15:03:24 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Write completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 Read completed with error (sct=0, sc=8) 00:26:42.736 starting I/O failed 00:26:42.736 [2024-04-26 15:03:24.786739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:42.736 [2024-04-26 15:03:24.787234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.736 [2024-04-26 15:03:24.787602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.736 
[2024-04-26 15:03:24.787614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.736 qpair failed and we were unable to recover it. 00:26:42.736 [2024-04-26 15:03:24.788081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.736 [2024-04-26 15:03:24.788423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.788436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.788720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.789066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.789102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.789334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.789639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.789654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.790001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.790268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.790278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.790581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.790873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.790883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.791212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.791519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.791529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.791754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.791813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.791822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 
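The trace above is the target-side setup for nvmf_target_disconnect_tc2: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace with core mask 0xF0, a 64 MB malloc bdev is exposed through subsystem nqn.2016-06.io.spdk:cnode1 on a TCP listener at 10.0.0.2:4420, the reconnect example is run against that listener, and the target process (PID 1231046) is then killed with SIGKILL. The in-flight I/Os complete with an abort status (sct=0, sc=8, which in the NVMe generic status set corresponds to "command aborted due to SQ deletion"), the host reports a CQ transport error of -6 (ENXIO, "No such device or address"), and reconnect attempts start failing as shown below. As a rough, non-authoritative sketch of the same RPC sequence, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket (flags copied from the trace, paths illustrative):

  # Sketch only: mirrors the rpc_cmd calls traced above; rpc.py path/socket are assumptions.
  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_malloc_create 64 512 -b Malloc0                                        # 64 MB bdev, 512-byte blocks
  $RPC nvmf_create_transport -t tcp -o                                             # TCP transport (flags as traced)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # attach the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Host side: build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF
  # (per the usual SPDK example conventions: queue depth 32, 4 KiB I/O, 50/50 randrw, 10 s, core mask 0xF).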
00:26:42.737 [2024-04-26 15:03:24.792063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.792387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.792397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.792733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.793048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.793058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.793419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.793642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.793651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.793898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.794134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.794144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.794488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.794792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.794801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.795027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.795416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.795428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.795634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.795932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.795942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 
00:26:42.737 [2024-04-26 15:03:24.796312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.796621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.796631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.796970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.797198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.797209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.797543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.797850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.797860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.798170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.798369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.798379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.798734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.799033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.799044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.799262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.799588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.799598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.799895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.800210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.800219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 
00:26:42.737 [2024-04-26 15:03:24.800919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.801261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.801272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.801581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.801882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.801894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.802234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.802553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.802562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.802863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.803165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.803174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.803332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.803666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.803675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.803986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.804314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.804323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 00:26:42.737 [2024-04-26 15:03:24.804656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.804969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.804979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.737 qpair failed and we were unable to recover it. 
00:26:42.737 [2024-04-26 15:03:24.805156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.805449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.737 [2024-04-26 15:03:24.805458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.805781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.806090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.806100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.806444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.806644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.806653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.806949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.807215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.807224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.807568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.807928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.807937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.808361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.808625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.808634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.808967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.809298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.809307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 
00:26:42.738 [2024-04-26 15:03:24.809628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.809813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.809822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.810179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.810523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.810535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.810907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.811203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.811214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.811555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.811754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.811765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.812092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.812457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.812469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.812768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.813057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.813068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.813371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.813667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.813678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 
00:26:42.738 [2024-04-26 15:03:24.813903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.814191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.814202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.814586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.814901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.814913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.815241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.815506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.815518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.815809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.816138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.816150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.816498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.816810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.816822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.817161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.817482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.817494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.817850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.818200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.818212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 
00:26:42.738 [2024-04-26 15:03:24.818579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.818899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.818912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.819090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.819447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.819458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.819749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.820084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.820096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.820402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.820682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.820693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.821003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.821204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.821217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.821378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.821735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.821747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.822053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.822386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.822397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 
00:26:42.738 [2024-04-26 15:03:24.822696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.823102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.738 [2024-04-26 15:03:24.823119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.738 qpair failed and we were unable to recover it. 00:26:42.738 [2024-04-26 15:03:24.823457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.823857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.823873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.824175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.824485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.824500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.824804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.825137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.825154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.825472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.825794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.825810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.826182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.826532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.826547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.826900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.827135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.827151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 
00:26:42.739 [2024-04-26 15:03:24.827464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.827780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.827795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.828182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.828380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.828395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.828710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.829017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.829033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.829335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.829658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.829673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.830042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.830403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.830418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.830717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.831063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.831079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.831381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.831706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.831721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 
00:26:42.739 [2024-04-26 15:03:24.832106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.832442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.832457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.832750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.832944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.832962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.833242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.833586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.833601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.833903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.834212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.834228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.834547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.834875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.834891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.835217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.835580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.835596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.835921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.836258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.836273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 
00:26:42.739 [2024-04-26 15:03:24.836580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.836896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.836916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.837259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.837615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.837634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.837982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.838308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.838326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.838645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.838965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.838985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.839184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.839486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.839506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.839834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.840163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.840183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.840518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.840865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.840886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 
00:26:42.739 [2024-04-26 15:03:24.841266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.841611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.841631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.739 qpair failed and we were unable to recover it. 00:26:42.739 [2024-04-26 15:03:24.841973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.739 [2024-04-26 15:03:24.842271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.842290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.842607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.843006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.843027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.843360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.843709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.843729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.844041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.844369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.844388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.844701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.845019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.845039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.845366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.845725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.845744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 
00:26:42.740 [2024-04-26 15:03:24.846083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.846405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.846424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.846626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.846945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.846966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.847305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.847628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.847647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.847990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.848336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.848355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.848667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.848994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.849013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.849204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.849520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.849539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.849882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.850270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.850289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 
00:26:42.740 [2024-04-26 15:03:24.850657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.851012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.851040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.851378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.851726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.851752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.852181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.852493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.852520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.852791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.853065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.853093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.853461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.853810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.853844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.854186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.854536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.854563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.854928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.855271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.855297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 
00:26:42.740 [2024-04-26 15:03:24.855667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.856070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.856098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.856448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.856847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.856874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.857207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.857880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.857919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.858268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.858620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.858647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.859001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.859350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.859376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.859649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.859996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.860024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.860330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.860681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.860711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 
00:26:42.740 [2024-04-26 15:03:24.861085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.861440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.861467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.861845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.862091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.740 [2024-04-26 15:03:24.862118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.740 qpair failed and we were unable to recover it. 00:26:42.740 [2024-04-26 15:03:24.862494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.862832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.862869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.863319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.863541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.863567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.863931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.864289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.864316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.864613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.864878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.864909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.865148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.865531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.865558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 
00:26:42.741 [2024-04-26 15:03:24.865917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.866299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.866327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.866684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.867015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.867043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.867423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.867746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.867772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.868052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.868408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.868434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.868802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.869182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.869210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.869577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.869911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.869939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.870308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.870643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.870669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 
00:26:42.741 [2024-04-26 15:03:24.871003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.871277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.871302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.871656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.871976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.872004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.872229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.872612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.872639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.873015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.873370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.873396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.873728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.874075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.874102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.874448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.874795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.874822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.875196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.875520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.875546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 
00:26:42.741 [2024-04-26 15:03:24.875900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.876259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.876286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.876625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.876985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.877013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.877397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.877717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.877744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.878084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.878396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.741 [2024-04-26 15:03:24.878422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.741 qpair failed and we were unable to recover it. 00:26:42.741 [2024-04-26 15:03:24.878787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.879109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.879137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.879501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.879859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.879888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.880121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.880434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.880460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 
00:26:42.742 [2024-04-26 15:03:24.880909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.881224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.881251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.881581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.881918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.881945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.882308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.882662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.882687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.883029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.883387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.883413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.883750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.884078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.884108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.884455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.884775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.884802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.885220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.885557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.885583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 
00:26:42.742 [2024-04-26 15:03:24.885988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.886344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.886370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.886727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.886990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.887018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.887393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.887738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.887765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.888100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.888450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.888478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.888833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.889164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.889191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.889553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.889905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.889932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.890313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.890670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.890702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 
00:26:42.742 [2024-04-26 15:03:24.891039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.891372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.891398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.891651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.891992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.892021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.892385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.892738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.892764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.893142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.893494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.893521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.893885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.894222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.894248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.894603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.894956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.894983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.895344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.895582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.895608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 
00:26:42.742 [2024-04-26 15:03:24.895974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.896299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.896325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.896641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.896959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.896986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.897330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.897649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.897680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.742 [2024-04-26 15:03:24.898025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.898361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.742 [2024-04-26 15:03:24.898386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.742 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.898811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.899150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.899178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.899434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.899772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.899799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.900223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.900548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.900574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 
00:26:42.743 [2024-04-26 15:03:24.900905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.901246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.901273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.901639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.901861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.901891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.902297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.902636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.902662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.903094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.903401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.903427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.903782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.904157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.904185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.904558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.904910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.904944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.905191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.905550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.905577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 
00:26:42.743 [2024-04-26 15:03:24.905924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.906296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.906323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.906683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.907002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.907030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.907386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.907715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.907742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.908092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.908455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.908482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.908847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.909185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.909212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.909560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.909893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.909921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.910269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.910587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.910614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 
00:26:42.743 [2024-04-26 15:03:24.910855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.911198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.911225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.911546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.911914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.911946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.912295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.912542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.912568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.912860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.913182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.913209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.913570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.913790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.913819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.914197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.914550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.914577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.914900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.915230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.915256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 
00:26:42.743 [2024-04-26 15:03:24.915592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.915917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.915945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.916282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.916644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.916671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.917015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.917368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.917395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.917764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.918088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.743 [2024-04-26 15:03:24.918116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.743 qpair failed and we were unable to recover it. 00:26:42.743 [2024-04-26 15:03:24.918487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.918762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.918788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.919159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.919502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.919528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.919866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.920247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.920273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 
00:26:42.744 [2024-04-26 15:03:24.920675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.921017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.921044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.921447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.921767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.921794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.922169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.922544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.922570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.922963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.923317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.923344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.923702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.924042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.924069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.924451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.924770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.924796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.925158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.925478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.925504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 
00:26:42.744 [2024-04-26 15:03:24.925865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.926182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.926209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.926582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.926923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.926950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.927317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.927668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.927694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.928044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.928285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.928314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.928684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.928968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.928994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.929315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.929665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.929692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.929921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.930181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.930208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 
00:26:42.744 [2024-04-26 15:03:24.930576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.930900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.930927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.931285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.931632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.931659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.932039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.932371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.932397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.932798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.933157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.933184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.933527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.933880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.933908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.934293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.934615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.934641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.935051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.935366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.935393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 
00:26:42.744 [2024-04-26 15:03:24.935725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.936071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.936099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.936418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.936813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.936845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.937181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.937500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.937526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.937879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.938267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.938293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.744 qpair failed and we were unable to recover it. 00:26:42.744 [2024-04-26 15:03:24.938637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.744 [2024-04-26 15:03:24.938988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.939015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 00:26:42.745 [2024-04-26 15:03:24.939371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.939717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.939743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 00:26:42.745 [2024-04-26 15:03:24.940086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.940439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.940465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 
00:26:42.745 [2024-04-26 15:03:24.940814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.941058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.941087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 00:26:42.745 [2024-04-26 15:03:24.941455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.941779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.941806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 00:26:42.745 [2024-04-26 15:03:24.942106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.942464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.942490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 00:26:42.745 [2024-04-26 15:03:24.942743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.943079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.943110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 00:26:42.745 [2024-04-26 15:03:24.943478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.943731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.943757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 00:26:42.745 [2024-04-26 15:03:24.944114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.944540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.944566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 00:26:42.745 [2024-04-26 15:03:24.944942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.945318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.745 [2024-04-26 15:03:24.945345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.745 qpair failed and we were unable to recover it. 
00:26:42.745 [2024-04-26 15:03:24.945718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.745 [2024-04-26 15:03:24.946044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.745 [2024-04-26 15:03:24.946072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420
00:26:42.745 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously for every retry from [2024-04-26 15:03:24.946319] through [2024-04-26 15:03:25.051247] ...]
00:26:42.751 [2024-04-26 15:03:25.051362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.751 [2024-04-26 15:03:25.051388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420
00:26:42.751 qpair failed and we were unable to recover it.
00:26:42.751 [2024-04-26 15:03:25.051722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.051974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.052001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.052239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.052464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.052490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.052856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.053213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.053240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.053563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.053924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.053952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.054327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.054651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.054678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.054915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.055152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.055181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.055553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.055899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.055927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 
00:26:42.751 [2024-04-26 15:03:25.056365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.056716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.056743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.057104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.057433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.057460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.057801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.058177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.058205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.058448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.058834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.058872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.059236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.059561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.059588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.059957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.060321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.060350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.060485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.060872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.060900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 
00:26:42.751 [2024-04-26 15:03:25.061292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.061503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.061529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.061944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.062308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.062336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.062756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.063013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.063042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.063306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.063674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.063700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.063945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.064302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.064329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.064553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.064785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.064811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.065264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.065583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.065609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 
00:26:42.751 [2024-04-26 15:03:25.065987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.066346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.066374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.066749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.067087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.067115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.067241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.067630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.067658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.067896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.068282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.068309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.068581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.068974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.069002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.069365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.069599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.751 [2024-04-26 15:03:25.069625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.751 qpair failed and we were unable to recover it. 00:26:42.751 [2024-04-26 15:03:25.070012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.070366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.070393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 
00:26:42.752 [2024-04-26 15:03:25.070750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.071102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.071130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.071490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.071856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.071884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.072251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.072472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.072500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.072900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.073184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.073211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.073587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.074018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.074047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.074410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.074767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.074795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.075140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.075473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.075499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 
00:26:42.752 [2024-04-26 15:03:25.075865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.076193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.076220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.076637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.076973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.077001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.077347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.077687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.077714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.078109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.078469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.078496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.078831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.079229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.079257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.079526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.079854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.079883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.080277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.080631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.080657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 
00:26:42.752 [2024-04-26 15:03:25.080833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.081229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.081256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.081633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.081988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.082016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.082458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.082749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.082776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.083175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.083556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.083583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.083933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.084294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.084321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.084701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.084966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.084994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.085238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.085599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.085625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 
00:26:42.752 [2024-04-26 15:03:25.085971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.086330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.086357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.086725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.086979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.087006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.087345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.087682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.087709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.088077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.088458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.088485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.088741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.088987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.089014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.089356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.089585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.089613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.752 qpair failed and we were unable to recover it. 00:26:42.752 [2024-04-26 15:03:25.089975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.752 [2024-04-26 15:03:25.090343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.090371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 
00:26:42.753 [2024-04-26 15:03:25.090707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.091056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.091083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.091450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.091801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.091830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.092219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.092552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.092579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.092933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.093182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.093208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.093569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.093773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.093800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.094087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.094445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.094471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.094821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.095178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.095206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 
00:26:42.753 [2024-04-26 15:03:25.095611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.095977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.096005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.096243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.096484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.096511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.096900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.097250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.097276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.097651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.097885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.097912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.098346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.098682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.098709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.099016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.099383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.099410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.099768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.100129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.100158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 
00:26:42.753 [2024-04-26 15:03:25.100513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.100869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.100897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.101268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.101589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.101615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.101977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.102216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.102242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.102567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.102912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.102940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.103308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.103644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.103671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.104038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.104361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.104388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.104604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.104917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.104945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 
00:26:42.753 [2024-04-26 15:03:25.105259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.105618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.105645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.105986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.106354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.106380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.106720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.106855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.106885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.107284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.107519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.107554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.107784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.108123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.108151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.108502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.108819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.108854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 00:26:42.753 [2024-04-26 15:03:25.109211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.109547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.753 [2024-04-26 15:03:25.109575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.753 qpair failed and we were unable to recover it. 
00:26:42.753 [2024-04-26 15:03:25.109932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.110304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.110331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.110696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.111031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.111059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.111434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.111786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.111813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.112232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.112653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.112680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.113026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.113368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.113395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.113731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.114083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.114111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.114362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.114739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.114771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 
00:26:42.754 [2024-04-26 15:03:25.115009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.115247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.115275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.115641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.115986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.116014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.116379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.116706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.116733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.117080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.117422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.117449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.117798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.118148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.118176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.118536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.118781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.118808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.119183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.119420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.119446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 
00:26:42.754 [2024-04-26 15:03:25.119796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.120010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.120040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.120400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.120635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.120662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.121008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.121358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.121389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.121759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.122122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.122149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.122397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.122749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.122775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.123179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.123528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.123555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.123913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.124241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.124268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 
00:26:42.754 [2024-04-26 15:03:25.124546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.124904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.124932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.125285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.125645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.125671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.125906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.126295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.126322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.126516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.126889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.126916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.127299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.127640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.127667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.127917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.128282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.128313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.754 [2024-04-26 15:03:25.128669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.129024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.129051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 
00:26:42.754 [2024-04-26 15:03:25.129412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.129736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.754 [2024-04-26 15:03:25.129763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.754 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.130098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.130453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.130479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.130855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.131207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.131234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.131590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.131952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.131979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.132405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.132754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.132780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.133191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.133534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.133561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.133921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.134245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.134271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 
00:26:42.755 [2024-04-26 15:03:25.134601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.134900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.134926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.135155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.135537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.135563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.135816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.136175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.136202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.136563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.136924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.136952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.137314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.137636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.137662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.138019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.138268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.138298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.138647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.138973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.139000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 
00:26:42.755 [2024-04-26 15:03:25.139372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.139601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.139633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.139962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.140162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.140191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.140551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.140890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.140917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.141296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.141620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.141648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.142060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.142424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.142451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.142713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.143034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.143062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.143309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.143666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.143692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 
00:26:42.755 [2024-04-26 15:03:25.143943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.144302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.144328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.144663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.144908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.144935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.145285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.145620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.145647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.146010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.146365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.755 [2024-04-26 15:03:25.146392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.755 qpair failed and we were unable to recover it. 00:26:42.755 [2024-04-26 15:03:25.146735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.147059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.147087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.147458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.147853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.147881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.148196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.148545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.148572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 
00:26:42.756 [2024-04-26 15:03:25.148928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.149161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.149189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.149538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.149870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.149897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.150183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.150396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.150425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.150788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.151129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.151157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.151498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.151847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.151875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.152210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.152563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.152589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.152886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.153221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.153248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 
00:26:42.756 [2024-04-26 15:03:25.153623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.153975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.154002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.154234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.154594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.154620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.154966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.155319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.155346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.155593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.155937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.155965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.156333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.156560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.156589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.156834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.157201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.157228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.157501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.157893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.157920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 
00:26:42.756 [2024-04-26 15:03:25.158225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.158566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.158592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.158933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.159305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.159331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.159696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.159973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.160001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.160227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.160601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.160627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.160992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.161336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.161363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.161681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.161957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.161985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.162192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.162535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.162562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 
00:26:42.756 [2024-04-26 15:03:25.162931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.163286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.163312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.163660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.164010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.164038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.164373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.164711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.164738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.165095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.165448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.165474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.756 qpair failed and we were unable to recover it. 00:26:42.756 [2024-04-26 15:03:25.165818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.166158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.756 [2024-04-26 15:03:25.166186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.166539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.166760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.166786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.167148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.167510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.167536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 
00:26:42.757 [2024-04-26 15:03:25.167897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.168268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.168295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.168667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.169002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.169029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.169391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.169665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.169693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.170040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.170384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.170411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.170763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.171097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.171125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.171484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.171856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.171885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.172249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.172586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.172613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 
00:26:42.757 [2024-04-26 15:03:25.173017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.173381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.173407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.173758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.174094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.174121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.174470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.174736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.174761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.175025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.175347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.175374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.175718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.175951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.175981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.176407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.176731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.176758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.177139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.177479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.177506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 
00:26:42.757 [2024-04-26 15:03:25.177865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.178202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.178229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.178570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.178932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.178960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.179357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.179709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.179736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.180084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.180426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.180452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.180795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.181035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.181063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.181438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.181801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.181828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.182184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.182526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.182552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 
00:26:42.757 [2024-04-26 15:03:25.182774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.183152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.183181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.183539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.183901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.183928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.184301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.184503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.184533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.184887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.185140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.185169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.185548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.185879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.185907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.757 qpair failed and we were unable to recover it. 00:26:42.757 [2024-04-26 15:03:25.186269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.757 [2024-04-26 15:03:25.186626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.186653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.186885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.187163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.187190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 
00:26:42.758 [2024-04-26 15:03:25.187552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.187911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.187938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.188304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.188609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.188636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.189075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.189431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.189459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.189824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.190151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.190179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.190538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.190869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.190897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.191134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.191417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.191444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.191657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.192017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.192045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 
00:26:42.758 [2024-04-26 15:03:25.192400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.192775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.192801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.193149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.193498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.193524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.193886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.194305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.194332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.194760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.195079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.195108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.195481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.195828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.195864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.196188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.196518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.196545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.196875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.197207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.197233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 
00:26:42.758 [2024-04-26 15:03:25.197483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.197883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.197911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.198252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.198611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.198638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.198996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.199359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.199387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.199743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.200081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.200109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.200491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.200853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.200880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.201096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.201462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.201488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.201856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.202184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.202211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 
00:26:42.758 [2024-04-26 15:03:25.202592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.202925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.202954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.203387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.203708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.203734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.204153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.204513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.204539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.204893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.205267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.205295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.205647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.206000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.206028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.758 [2024-04-26 15:03:25.206390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.206746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.758 [2024-04-26 15:03:25.206772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.758 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.207131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.207489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.207515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 
00:26:42.759 [2024-04-26 15:03:25.207881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.208222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.208249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.208594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.208921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.208949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.209302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.209634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.209660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.210032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.210265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.210294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.210658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.210866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.210895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.211237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.211590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.211616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.212071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.212448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.212474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 
00:26:42.759 [2024-04-26 15:03:25.212733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.213067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.213096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.213436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.213786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.213812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.214165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.214508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.214535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.214881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.215108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.215134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.215511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.215875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.215903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.216149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.216532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.216559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.216907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.217131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.217160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 
00:26:42.759 [2024-04-26 15:03:25.217531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.217951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.217980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.218406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.218726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.218752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.219053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.219393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.219420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.219770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.220113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.220147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.220403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.220768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.220795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.221226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.221583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.221610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.221969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.222301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.222327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 
00:26:42.759 [2024-04-26 15:03:25.222726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.223064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.223091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.223423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.223778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.223804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.759 qpair failed and we were unable to recover it. 00:26:42.759 [2024-04-26 15:03:25.224180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.224521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.759 [2024-04-26 15:03:25.224547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.224911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.225253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.225280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.225646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.225980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.226007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.226340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.226685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.226712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.227051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.227416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.227448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 
00:26:42.760 [2024-04-26 15:03:25.227805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.228198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.228225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.228574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.228907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.228935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.229281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.229608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.229635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.229986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.230160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.230186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.230547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.230892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.230919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.231307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.231663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.231690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.231912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.232301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.232327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 
00:26:42.760 [2024-04-26 15:03:25.232695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.233049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.233077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.233477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.233830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.233866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.234217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.234579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.234611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.234985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.235341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.235367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.235714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.235936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.235967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.236313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.236650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.236676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.236942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.237294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.237321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 
00:26:42.760 [2024-04-26 15:03:25.237697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.238030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.238058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.238438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.238789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.238815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.239239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.239594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.239625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.239970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.240318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.240344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.240595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.240929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.240957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.241322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.241711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.241744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.242102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.242357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.242384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 
00:26:42.760 [2024-04-26 15:03:25.242755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.243111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.243139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.243502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.243855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.243883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.244270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.244602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.760 [2024-04-26 15:03:25.244628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.760 qpair failed and we were unable to recover it. 00:26:42.760 [2024-04-26 15:03:25.244975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.245325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.245351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.245696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.245925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.245956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.246181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.246542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.246569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.246937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.247289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.247315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 
00:26:42.761 [2024-04-26 15:03:25.247674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.248004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.248032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.248433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.248669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.248697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.249059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.249418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.249444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.249820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.250160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.250188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.250426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.250777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.250804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.251175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.251509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.251536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.251882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.252244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.252271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 
00:26:42.761 [2024-04-26 15:03:25.252656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.252888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.252916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.253265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.253609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.253635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.253993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.254234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.254264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.254621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.254983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.255010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.255378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.255754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.255780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.256116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.256435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.256461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.256805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.257147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.257175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 
00:26:42.761 [2024-04-26 15:03:25.257501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.257822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.257859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.258193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.258544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.258571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.258932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.259323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.259350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.259716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.259982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.260009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.260332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.260570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.260599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.260854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.261207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.261234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.261583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.261913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.261941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 
00:26:42.761 [2024-04-26 15:03:25.262291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.262653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.262679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.263039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.263483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.263509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.263894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.264249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.264275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.761 [2024-04-26 15:03:25.264639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.264947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.761 [2024-04-26 15:03:25.264976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.761 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.265358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.265591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.265617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.265866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.266243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.266270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.266653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.267007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.267035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 
00:26:42.762 [2024-04-26 15:03:25.267373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.267730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.267756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.268145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.268498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.268525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.268774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.269136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.269163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.269480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.269894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.269921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.270179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.270501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.270528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.270879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.271239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.271266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.271627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.271989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.272017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 
00:26:42.762 [2024-04-26 15:03:25.272393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.272615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.272641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.272866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.273248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.273275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.273648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.274055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.274084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.274441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.274802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.274828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.275197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.275539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.275565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.275925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.276292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.276319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.276562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.276908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.276935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 
00:26:42.762 [2024-04-26 15:03:25.277176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.277544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.277571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.277951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.278294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.278321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.278685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.279028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.279055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.279401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.279765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.279791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.280064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.280433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.280460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.280812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.281066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.281095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.281477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.281822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.281859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 
00:26:42.762 [2024-04-26 15:03:25.282211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.282409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.282435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.282667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.283025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.283054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.283310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.283674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.283700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.284084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.284436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.284463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.762 qpair failed and we were unable to recover it. 00:26:42.762 [2024-04-26 15:03:25.284836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.762 [2024-04-26 15:03:25.285203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.285230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.285595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.285959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.285987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.286339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.286666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.286692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 
00:26:42.763 [2024-04-26 15:03:25.286959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.287322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.287349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.287753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.288108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.288136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.288493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.288858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.288886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.289248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.289504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.289530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.289764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.290016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.290044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.290382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.290742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.290769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.291056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.291412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.291438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 
00:26:42.763 [2024-04-26 15:03:25.291817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.292069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.292097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.292451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.292817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.292865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.293144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.293480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.293506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.293865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.294097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.294127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.294508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.294860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.294889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.295226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.295565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.295592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.295947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.296309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.296335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 
00:26:42.763 [2024-04-26 15:03:25.296699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.296926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.296953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.297355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.297702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.297728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.298103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.298448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.298475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.298855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.299202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.299229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.299519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.299915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.299944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.300323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.300656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.300683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.301037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.301385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.301410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 
00:26:42.763 [2024-04-26 15:03:25.301769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.302107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.302135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.302497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.302812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.302848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.303229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.303468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.303498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.303858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.304089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.304115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.304522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.304875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.763 [2024-04-26 15:03:25.304905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.763 qpair failed and we were unable to recover it. 00:26:42.763 [2024-04-26 15:03:25.305229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.305468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.305494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.305849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.306072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.306100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 
00:26:42.764 [2024-04-26 15:03:25.306438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.306816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.306852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.307212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.307540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.307567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.308009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.308369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.308396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.308788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.309127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.309155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.309402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.309794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.309820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.310216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.310464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.310494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.310849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.311226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.311253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 
00:26:42.764 [2024-04-26 15:03:25.311576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.311921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.311948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.312335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.312713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.312739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.313109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.313484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.313511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.313865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.314241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.314267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.314636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.314995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.315023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.315394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.315756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.315785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.316048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.316433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.316461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 
00:26:42.764 [2024-04-26 15:03:25.316708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.317075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.317104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.317475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.317694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.317720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.318104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.318346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.318375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.318656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.319067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.319095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.319216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.319572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.319600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.319965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.320303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.320330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.320697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.321051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.321080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 
00:26:42.764 [2024-04-26 15:03:25.321334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.321704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.321730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.764 qpair failed and we were unable to recover it. 00:26:42.764 [2024-04-26 15:03:25.322062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.764 [2024-04-26 15:03:25.322410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.322436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.322800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.323200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.323228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.323595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.323848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.323876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.324258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.324608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.324635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.325002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.325350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.325377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.325789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.326139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.326168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 
00:26:42.765 [2024-04-26 15:03:25.326458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.326862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.326890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.327274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.327518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.327544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.327910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.328155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.328181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.328565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.328903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.328932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.329277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.329551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.329578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.329941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.330303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.330330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.330697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.331033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.331060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 
00:26:42.765 [2024-04-26 15:03:25.331441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.331786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.331814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.332206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.332534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.332562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.332712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.332934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.332963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.333327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.333699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.333731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.334032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.334392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.334420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.334676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.335029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.335057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.335442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.335799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.335825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 
00:26:42.765 [2024-04-26 15:03:25.336055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.336449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.336475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.336671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.337012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.337041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.337296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.337652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.337679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.338007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.338354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.338381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.338744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.338964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.338992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.339405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.339632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.339666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.339892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.340253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.340287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 
00:26:42.765 [2024-04-26 15:03:25.340665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.341026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.341055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.341286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.341396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.341425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.341760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.342127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.342155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.765 qpair failed and we were unable to recover it. 00:26:42.765 [2024-04-26 15:03:25.342522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.342886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.765 [2024-04-26 15:03:25.342914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.343280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.343551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.343578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.343936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.344302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.344328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.344664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.345035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.345064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 
00:26:42.766 [2024-04-26 15:03:25.345422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.345786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.345812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.346156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.346524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.346551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.346859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.347102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.347134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.347471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.347833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.347869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.348263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.348556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.348581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.348963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.349201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.349230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.349618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.349986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.350014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 
00:26:42.766 [2024-04-26 15:03:25.350380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.350747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.350774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.351121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.351470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.351497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.351967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.352318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.352344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.352580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.352832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.352872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.353248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.353600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.353626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.353996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.354250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.354285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.354679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.355043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.355071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 
00:26:42.766 [2024-04-26 15:03:25.355421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.355785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.355811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.356231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.356594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.356621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.356982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.357332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.357359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.357715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.358000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.358028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.358276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.358623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.358649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.358901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.359297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.359324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.359730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.360100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.360127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 
00:26:42.766 [2024-04-26 15:03:25.360502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.360831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.360865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.361214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.361576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.361602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.361960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.362331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.362358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.362720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.363077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.363105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.363444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.363804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.363831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.364175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.364510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.364538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 00:26:42.766 [2024-04-26 15:03:25.364873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.365115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.766 [2024-04-26 15:03:25.365144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.766 qpair failed and we were unable to recover it. 
00:26:42.767 [2024-04-26 15:03:25.365513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.365858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.365887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.366238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.366606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.366633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.367045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.367384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.367411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.367829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.368220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.368247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.368662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.368992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.369020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.369405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.369763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.369790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.370158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.370519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.370546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 
00:26:42.767 [2024-04-26 15:03:25.370884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.371255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.371281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.371628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.371978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.372006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.372294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.372630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.372656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.372998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.373375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.373402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.373746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.374002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.374030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.374385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.374731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.374757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.375115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.375459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.375487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 
00:26:42.767 [2024-04-26 15:03:25.375891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.376302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.376328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.376715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.377081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.377109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.377490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.377857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.377885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.378284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.378639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.378665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.379034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.379393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.379420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:42.767 [2024-04-26 15:03:25.379776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.380164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.767 [2024-04-26 15:03:25.380191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:42.767 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.380559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.380685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.380716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 
00:26:43.033 [2024-04-26 15:03:25.381087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.381320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.381347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.381716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.382081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.382109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.382489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.382858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.382887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.383151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.383402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.383431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.383861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.384228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.384255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.384487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.384743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.384770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.385165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.385537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.385564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 
00:26:43.033 [2024-04-26 15:03:25.385938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.386275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.386301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.386670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.387032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.387059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.387433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.387774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.387801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.033 qpair failed and we were unable to recover it. 00:26:43.033 [2024-04-26 15:03:25.388173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.388464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.033 [2024-04-26 15:03:25.388490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.388874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.389255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.389283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.389660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.390029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.390057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.390429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.390816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.390852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 
00:26:43.034 [2024-04-26 15:03:25.391135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.391536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.391564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.391932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.392383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.392409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.392771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.393140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.393169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.393540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.393900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.393928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.394268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.394620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.394647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.395018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.395428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.395455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.395835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.396260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.396287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 
00:26:43.034 [2024-04-26 15:03:25.396654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.397023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.397051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.397421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.397803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.397829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.398201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.398520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.398547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.399001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.399280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.399306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.399657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.400004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.400031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.400381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.400721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.400748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.400988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.401346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.401373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 
00:26:43.034 [2024-04-26 15:03:25.401732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.402066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.402094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.402417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.402773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.402799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.403161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.403423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.403448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.403803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.404214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.404242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.404601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.404970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.405000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.405266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.405651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.405677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.406071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.406464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.406490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 
00:26:43.034 [2024-04-26 15:03:25.406855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.407131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.407158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.407521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.407766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.407795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.408178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.408541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.408568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.408953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.409315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.034 [2024-04-26 15:03:25.409344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.034 qpair failed and we were unable to recover it. 00:26:43.034 [2024-04-26 15:03:25.409714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.410076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.410104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.410499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.410744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.410770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.411187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.411519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.411545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 
00:26:43.035 [2024-04-26 15:03:25.411979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.412353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.412380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.412764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.413111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.413139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.413484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.413856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.413885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.414241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.414595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.414622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.414980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.415345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.415373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.415618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.415952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.415979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.416410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.416773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.416801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 
00:26:43.035 [2024-04-26 15:03:25.417148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.417474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.417500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.417621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.417934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.417963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.418356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.418600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.418626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.418972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.419327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.419354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.419693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.420090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.420117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.420473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.420884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.420914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.421168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.421548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.421575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 
00:26:43.035 [2024-04-26 15:03:25.421924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.422306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.422333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.422639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.423047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.423074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.423419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.423834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.423886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.424270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.424636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.424663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.425057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.425427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.425454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.425805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.426187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.426215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.426583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.426956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.426985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 
00:26:43.035 [2024-04-26 15:03:25.427372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.427650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.427677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.428004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.428360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.428388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.428764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.429139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.429168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.429603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.429982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.430011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.035 qpair failed and we were unable to recover it. 00:26:43.035 [2024-04-26 15:03:25.430357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.430694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.035 [2024-04-26 15:03:25.430721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.431090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.431443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.431470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.431706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.432050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.432078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 
00:26:43.036 [2024-04-26 15:03:25.432442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.432808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.432835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.433122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.433499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.433526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.433773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.434142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.434170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.434541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.434906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.434935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.435241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.435463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.435493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.435852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.436200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.436226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.436595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.436972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.437001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 
00:26:43.036 [2024-04-26 15:03:25.437400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.437780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.437807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.438227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.438691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.438718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.439089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.439456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.439484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.439870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.440206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.440233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.440612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.440943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.440972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.441218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.441588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.441614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.441973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.442351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.442377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 
00:26:43.036 [2024-04-26 15:03:25.442739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.443080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.443114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.443544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.443784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.443813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.444197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.444537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.444564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.444927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.445299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.445325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.445703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.446074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.446104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.446470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.446830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.446867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.447281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.447644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.447670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 
00:26:43.036 [2024-04-26 15:03:25.447982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.448337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.448363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.448612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.448966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.448994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.449374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.449736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.449764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.450118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.450464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.450497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.450870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.451138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.036 [2024-04-26 15:03:25.451168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.036 qpair failed and we were unable to recover it. 00:26:43.036 [2024-04-26 15:03:25.451427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.451630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.451658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.452037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.452417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.452443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 
00:26:43.037 [2024-04-26 15:03:25.452742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.453102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.453131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.453504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.453872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.453901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.454232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.454603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.454631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.455007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.455248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.455277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.455618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.456063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.456091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.456436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.456810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.456856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.457233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.457479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.457511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 
00:26:43.037 [2024-04-26 15:03:25.457897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.458255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.458282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.458620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.458999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.459027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.459339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.459703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.459729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.460112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.460476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.460504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.460877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.461123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.461149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.461533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.461881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.461909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.462285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.462538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.462567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 
00:26:43.037 [2024-04-26 15:03:25.462941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.463317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.463345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.463629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.463993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.464022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.464384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.464813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.464865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.465231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.465595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.465622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.465883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.466253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.466280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.466615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.466946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.466973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.467338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.467681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.467709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 
00:26:43.037 [2024-04-26 15:03:25.468066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.468309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.468339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.468698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.469046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.469074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.469448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.469853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.469881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.470257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.470625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.470651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.471034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.471425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.471451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.037 qpair failed and we were unable to recover it. 00:26:43.037 [2024-04-26 15:03:25.471688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.037 [2024-04-26 15:03:25.471964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.471992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.472371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.472723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.472749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 
00:26:43.038 [2024-04-26 15:03:25.473101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.473462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.473489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.473867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.474287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.474313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.474681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.475062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.475090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.475469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.475860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.475889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.476240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.476480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.476509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.476770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.477131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.477159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.477528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.477894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.477922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 
00:26:43.038 [2024-04-26 15:03:25.478299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.478669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.478696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.479054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.479306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.479335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.479703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.480080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.480109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.480275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.480699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.480726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.480998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.481387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.481414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.481778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.482147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.482176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.482342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.482704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.482732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 
00:26:43.038 [2024-04-26 15:03:25.483062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.483423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.483450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.483755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.484126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.484155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.484564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.484913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.484941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.485317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.485681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.485708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.485965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.486326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.486353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.486771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.487096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.487124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 00:26:43.038 [2024-04-26 15:03:25.487527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.487865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.487893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.038 qpair failed and we were unable to recover it. 
00:26:43.038 [2024-04-26 15:03:25.488242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.038 [2024-04-26 15:03:25.488610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.488636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.488995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.489256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.489282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.489670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.490021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.490049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.490484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.490856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.490884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.491240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.491589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.491617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.491978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.492345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.492372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.492740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.493079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.493107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 
00:26:43.039 [2024-04-26 15:03:25.493365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.493706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.493732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.494080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.494446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.494473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.494856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.495209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.495236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.495600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.495853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.495882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.496160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.496397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.496427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.496795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.497176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.497205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.497561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.497970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.497999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 
00:26:43.039 [2024-04-26 15:03:25.498362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.498591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.498618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.498987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.499359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.499386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.499765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.500010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.500039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.500415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.500787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.500813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.501275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.501619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.501645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.502073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.502435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.502462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.502831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.503194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.503221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 
00:26:43.039 [2024-04-26 15:03:25.503483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.503862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.503889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.504271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.504613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.504640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.505017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.505366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.505392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.505744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.506109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.506138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.506519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.506886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.506914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.507267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.507649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.507675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.039 [2024-04-26 15:03:25.508061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.508447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.508474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 
00:26:43.039 [2024-04-26 15:03:25.508872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.509249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.039 [2024-04-26 15:03:25.509275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.039 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.509635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.509900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.509927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.510316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.510685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.510712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.511081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.511423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.511449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.511829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.512076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.512105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.512453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.512786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.512813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.513178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.513530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.513556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 
00:26:43.040 [2024-04-26 15:03:25.513916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.514296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.514323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.514701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.515122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.515151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.515397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.515778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.515806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.516196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.516546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.516573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.517009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.517361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.517388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.517766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.518005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.518035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.518434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.518777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.518804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 
00:26:43.040 [2024-04-26 15:03:25.519089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.519455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.519482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.519861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.520250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.520278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.520535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.520879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.520907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.521257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.521509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.521536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.521963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.522333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.522360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.522706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.523069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.523098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.523475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.523851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.523879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 
00:26:43.040 [2024-04-26 15:03:25.524231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.524575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.524602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.524968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.525317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.525343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.525735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.526099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.526128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.526502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.526855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.526882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.527134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.527369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.527398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.527756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.528141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.528170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.528478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.528849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.528877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 
00:26:43.040 [2024-04-26 15:03:25.529235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.529597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.529624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.040 qpair failed and we were unable to recover it. 00:26:43.040 [2024-04-26 15:03:25.529887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.040 [2024-04-26 15:03:25.530310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.530337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.530704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.531037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.531065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.531425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.531793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.531819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.532214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.532570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.532597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.532948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.533318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.533345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.533697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.534063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.534091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 
00:26:43.041 [2024-04-26 15:03:25.534329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.534689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.534716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.535002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.535398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.535425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.535787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.536149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.536178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.536559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.536940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.536967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.537331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.537696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.537723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.538086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.538462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.538489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.538783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.539202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.539229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 
00:26:43.041 [2024-04-26 15:03:25.539599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.539973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.540004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.540369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.540738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.540764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.541144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.541385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.541411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.541795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.542184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.542212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.542453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.542859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.542888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.543245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.543505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.543535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.543828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.544210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.544236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 
00:26:43.041 [2024-04-26 15:03:25.544595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.544915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.544944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.545192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.545556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.545584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.545859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.546241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.546269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.546718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.547072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.547100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.547472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.547808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.547836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.548216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.548557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.548583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.549009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.549376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.549404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 
00:26:43.041 [2024-04-26 15:03:25.549779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.550053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.550082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.550487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.550832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.041 [2024-04-26 15:03:25.550870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.041 qpair failed and we were unable to recover it. 00:26:43.041 [2024-04-26 15:03:25.551174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.551557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.551584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.551984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.552351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.552378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.552542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.552940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.552969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.553347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.553731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.553757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.554136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.554537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.554563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 
00:26:43.042 [2024-04-26 15:03:25.554926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.555308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.555335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.555793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.556140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.556168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.556420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.556777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.556805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.557193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.557566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.557592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.557964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.558308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.558335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.558714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.559135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.559162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.559505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.559630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.559659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 
00:26:43.042 [2024-04-26 15:03:25.560080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.560477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.560509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.560772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.561192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.561220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.561599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.561948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.561976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.562378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.562622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.562648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.563031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.563458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.563485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.563745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.564113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.564140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.564530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.564914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.564943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 
00:26:43.042 [2024-04-26 15:03:25.565325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.565694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.565721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.566038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.566413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.566440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.566748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.567089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.567117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.567472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.567730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.567761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.042 qpair failed and we were unable to recover it. 00:26:43.042 [2024-04-26 15:03:25.568153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.042 [2024-04-26 15:03:25.568401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.568429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.568700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.569082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.569111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.569490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.569864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.569893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 
00:26:43.043 [2024-04-26 15:03:25.570276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.570645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.570672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.571024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.571400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.571427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.571796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.572168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.572196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.572332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.572627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.572655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.572985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.573373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.573400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.573771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.574215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.574244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.574623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.575035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.575069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 
00:26:43.043 [2024-04-26 15:03:25.575413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.575849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.575877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.576241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.576485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.576512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.576885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.577228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.577256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.577566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.577955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.577983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.578336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.578713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.578740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.579100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.579402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.579429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.579828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.580206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.580234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 
00:26:43.043 [2024-04-26 15:03:25.580604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.580719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.580746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.581112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.581473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.581500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.581944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.582300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.582333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.582719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.582994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.583021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.583398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.583766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.583795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.584154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.584499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.584525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.584885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.585167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.585193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 
00:26:43.043 [2024-04-26 15:03:25.585535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.585898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.585927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.586270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.586517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.586546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.586944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.587321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.587349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.587715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.588084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.588111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.043 qpair failed and we were unable to recover it. 00:26:43.043 [2024-04-26 15:03:25.588497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.588723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.043 [2024-04-26 15:03:25.588751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.589122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.589467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.589493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.589848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.590237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.590264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 
00:26:43.044 [2024-04-26 15:03:25.590520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.590898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.590926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.591377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.591708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.591734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.591973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.592377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.592403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.592760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.593112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.593140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.593526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.593936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.593964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.594339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.594709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.594735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.595106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.595466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.595492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 
00:26:43.044 [2024-04-26 15:03:25.595855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.596257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.596283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.596625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.596850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.596879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.597299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.597687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.597714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.598083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.598436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.598463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.598851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.599231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.599266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.599613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.599975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.600004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.600268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.600613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.600639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 
00:26:43.044 [2024-04-26 15:03:25.600986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.601372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.601399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.601746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.602121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.602149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.602529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.602907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.602935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.603272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.603616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.603643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.603993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.604365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.604391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.604737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.605049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.605076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.605444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.605675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.605705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 
00:26:43.044 [2024-04-26 15:03:25.606090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.606427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.606453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.606687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.607043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.607070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.607438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.607803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.607828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.608083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.608316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.608344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.608753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.609119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.609148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.044 qpair failed and we were unable to recover it. 00:26:43.044 [2024-04-26 15:03:25.609514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.044 [2024-04-26 15:03:25.609864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.609892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.610277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.610647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.610674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 
00:26:43.045 [2024-04-26 15:03:25.610986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.611329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.611356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.611709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.612057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.612085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.612332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.612597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.612627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.612951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.613205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.613235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.613612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.613995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.614023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.614394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.614759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.614785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.615149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.615554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.615581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 
00:26:43.045 [2024-04-26 15:03:25.615945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.616316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.616344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.616724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.617128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.617156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.617543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.617909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.617938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.618321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.618574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.618602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.618994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.619340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.619367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.619705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.620086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.620114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.620476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.620850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.620879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 
00:26:43.045 [2024-04-26 15:03:25.621249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.621616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.621642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.622000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.622386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.622412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.622788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.623153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.623181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.623539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.623777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.623806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.624166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.624513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.624539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.624896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.625260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.625286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.625671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.626035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.626064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 
00:26:43.045 [2024-04-26 15:03:25.626428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.626794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.626820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.627193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.627527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.627553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.627950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.628307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.628333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.628721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.629093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.629120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.629492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.629871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.629900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.630199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.630558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.045 [2024-04-26 15:03:25.630585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.045 qpair failed and we were unable to recover it. 00:26:43.045 [2024-04-26 15:03:25.630943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.631309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.631336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 
00:26:43.046 [2024-04-26 15:03:25.631710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.632038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.632066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.632460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.632830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.632883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.633118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.633505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.633532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.633893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.634220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.634247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.634582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.634944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.634972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.635331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.635713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.635740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.636129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.636471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.636497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 
00:26:43.046 [2024-04-26 15:03:25.636850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.636987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.637015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.637393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.637635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.637672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.638056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.638398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.638424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.638801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.639019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.639047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.639419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.639799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.639825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.640229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.640582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.640609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.640983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.641393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.641420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 
00:26:43.046 [2024-04-26 15:03:25.641777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.642215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.642243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.642619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.642967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.642995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.643466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.643693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.643721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.644171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.644511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.644538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.644853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.645276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.645302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.645658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.646027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.646057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.646440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.646783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.646809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 
00:26:43.046 [2024-04-26 15:03:25.647244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.647605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.647632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.647979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.648366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.648393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.648776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.649132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.649161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.649412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.649755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.649781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.650180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.650551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.650579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.650818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.651077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.651105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.046 qpair failed and we were unable to recover it. 00:26:43.046 [2024-04-26 15:03:25.651505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.651849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.046 [2024-04-26 15:03:25.651878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 
00:26:43.047 [2024-04-26 15:03:25.652129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.652514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.652541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.652892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.653283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.653311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.653684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.654047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.654076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.654339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.654724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.654750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.655097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.655435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.655462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.655856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.656219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.656246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.656493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.656859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.656888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 
00:26:43.047 [2024-04-26 15:03:25.657158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.657503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.657530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.657908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.658275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.658301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.658680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.659030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.659058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.659510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.659886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.659914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.660317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.660660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.660686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.661042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.661415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.661442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.661880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.662279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.662305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 
00:26:43.047 [2024-04-26 15:03:25.662685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.663047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.663074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.663431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.663662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.663692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.664128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.664471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.664498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.664751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.665126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.665155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.665537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.665815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.665863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.666261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.666602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.666630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.666992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.667362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.667388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 
00:26:43.047 [2024-04-26 15:03:25.667747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.668120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.668148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.668516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.668900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.668927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.669161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.669554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.669581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.047 qpair failed and we were unable to recover it. 00:26:43.047 [2024-04-26 15:03:25.669952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.047 [2024-04-26 15:03:25.670342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.670368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.670629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.671019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.671048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.671390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.671793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.671819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.672109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.672477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.672504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 
00:26:43.048 [2024-04-26 15:03:25.672869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.673208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.673235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.673655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.674032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.674060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.674425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.674769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.674795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.675160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.675536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.675563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.675935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.676301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.676328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.676696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.677068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.677096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.677471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.677811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.677851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 
00:26:43.048 [2024-04-26 15:03:25.678228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.678577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.678609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.678983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.679359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.679385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.679767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.680135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.680162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.680458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.680852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.680880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.681294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.681660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.681687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.682077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.682492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.682518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.682870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.683252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.683278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 
00:26:43.048 [2024-04-26 15:03:25.683589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.683943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.683971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.684327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.684694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.684720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.684985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.685226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.685252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.685609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.685988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.686021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.686398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.686768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.686795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.687184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.687516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.687542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.687919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.688295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.688323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 
00:26:43.048 [2024-04-26 15:03:25.688684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.689049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.689077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.689445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.689680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.689708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.689991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.690381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.690408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.048 [2024-04-26 15:03:25.690667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.691011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.048 [2024-04-26 15:03:25.691039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.048 qpair failed and we were unable to recover it. 00:26:43.049 [2024-04-26 15:03:25.691299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.049 [2024-04-26 15:03:25.691635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.049 [2024-04-26 15:03:25.691662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.049 qpair failed and we were unable to recover it. 00:26:43.049 [2024-04-26 15:03:25.691935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.049 [2024-04-26 15:03:25.692193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.049 [2024-04-26 15:03:25.692219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.049 qpair failed and we were unable to recover it. 00:26:43.049 [2024-04-26 15:03:25.692595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.049 [2024-04-26 15:03:25.692931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.049 [2024-04-26 15:03:25.692966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.049 qpair failed and we were unable to recover it. 
00:26:43.049 [2024-04-26 15:03:25.693316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.693673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.693703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.693943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.694286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.694313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.694709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.695094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.695123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.695555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.695900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.695928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.696300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.696664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.696691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.697058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.697412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.697440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.697810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.698057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.698089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 
00:26:43.318 [2024-04-26 15:03:25.698374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.698635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.698662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.699038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.699406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.699433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.699811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.700150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.700177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.700436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.700813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.700849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.701075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.701443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.701471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.701833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.702221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.702248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.702646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.702977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.703004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 
00:26:43.318 [2024-04-26 15:03:25.703356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.703675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.703701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.704086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.704446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.704474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.704860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.705230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.705256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.705637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.706015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.706042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.706416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.706732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.706759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.707109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.707451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.707477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.707830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.708199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.708226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 
00:26:43.318 [2024-04-26 15:03:25.708583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.708891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.708919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.709182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.709378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.709406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.709794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.710158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.710185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.710524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.710772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.710801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.318 [2024-04-26 15:03:25.711159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.711572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.318 [2024-04-26 15:03:25.711598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.318 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.711969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.712324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.712351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.712707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.713051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.713079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 
00:26:43.319 [2024-04-26 15:03:25.713519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.713854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.713882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.714141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.714400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.714427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.714834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.715235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.715264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.715645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.716011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.716040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.716267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.716616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.716645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.717013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.717366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.717395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.717750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.718017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.718049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 
00:26:43.319 [2024-04-26 15:03:25.718385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.718736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.718764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.719112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.719354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.719382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.719734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.720104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.720134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.720505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.720883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.720910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.721310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.721651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.721678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.722044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.722305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.722334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.722684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.723020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.723049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 
00:26:43.319 [2024-04-26 15:03:25.723424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.723796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.723823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.724182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.724501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.724529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.724899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.725197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.725228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.725504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.725861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.725891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.726266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.726641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.726669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.727058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.727431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.727460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.727855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.728222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.728251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 
00:26:43.319 [2024-04-26 15:03:25.728611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.728955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.728984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.729441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.729566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.729595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.730008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.730361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.730389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.730748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.731119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.731147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.731533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.731897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.731924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.319 qpair failed and we were unable to recover it. 00:26:43.319 [2024-04-26 15:03:25.732312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.319 [2024-04-26 15:03:25.732675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.732703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.733086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.733450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.733478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 
00:26:43.320 [2024-04-26 15:03:25.733881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.734249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.734277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.734533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.734898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.734926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.735303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.735665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.735691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.736040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.736272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.736298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.736672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.737117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.737145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.737397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.737777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.737804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.738198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.738560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.738588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 
00:26:43.320 [2024-04-26 15:03:25.738941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.739285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.739312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.739659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.740028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.740059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.740497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.740870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.740899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.741265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.741636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.741663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.742056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.742360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.742386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.742656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.743050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.743078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.743508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.743854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.743882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 
00:26:43.320 [2024-04-26 15:03:25.744252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.744613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.744639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.744898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.745245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.745272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.745643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.746036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.746066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.746445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.746799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.746826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.747194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.747617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.747644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.747910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.748292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.748318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.748697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.749033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.749060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 
00:26:43.320 [2024-04-26 15:03:25.749422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.749792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.749819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.750206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.750568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.750595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.750961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.751333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.751360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.751732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.752043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.752070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.752429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.752794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.752821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.320 qpair failed and we were unable to recover it. 00:26:43.320 [2024-04-26 15:03:25.753269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.320 [2024-04-26 15:03:25.753631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.753659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.753906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.754274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.754301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 
00:26:43.321 [2024-04-26 15:03:25.754669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.755058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.755086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.755463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.755830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.755865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.756230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.756479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.756509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.756895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.757287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.757314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.757688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.758017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.758046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.758422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.758791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.758817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.759191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.759588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.759616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 
00:26:43.321 [2024-04-26 15:03:25.760017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.760242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.760270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.760653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.760988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.761016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.761377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.761733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.761760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.762124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.762491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.762518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.762777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.763119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.763147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.763524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.763894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.763923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.764303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.764691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.764718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 
00:26:43.321 [2024-04-26 15:03:25.765085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.765459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.765486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.765857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.766185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.766211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.766470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.766836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.766875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.767247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.767616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.767642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.768019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.768385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.768411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.768771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.769179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.769207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.769573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.769940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.769969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 
00:26:43.321 [2024-04-26 15:03:25.770273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.770612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.770639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.771001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.771346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.771373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.771726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.772106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.772134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.772390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.772763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.772789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.773184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.773546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.773573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.773950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.774315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.321 [2024-04-26 15:03:25.774343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.321 qpair failed and we were unable to recover it. 00:26:43.321 [2024-04-26 15:03:25.774754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.775089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.775116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 
00:26:43.322 [2024-04-26 15:03:25.775489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.775854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.775882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.776231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.776569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.776595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.776970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.777340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.777367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.777625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.777996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.778024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.778391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.778759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.778785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.779160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.779541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.779567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.780011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.780379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.780406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 
00:26:43.322 [2024-04-26 15:03:25.780787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.781157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.781186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.781562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.781895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.781923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.782301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.782673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.782699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.783083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.783439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.783465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.783825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.784168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.784196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.784523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.784889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.784917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.785271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.785494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.785523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 
00:26:43.322 [2024-04-26 15:03:25.785984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.786371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.786397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.786772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.787186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.787213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.787580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.787910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.787938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.788372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.788626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.788654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.789029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.789418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.789451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.789829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.790120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.790148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.790525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.790894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.790923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 
00:26:43.322 [2024-04-26 15:03:25.791325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.791693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.791720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.322 qpair failed and we were unable to recover it. 00:26:43.322 [2024-04-26 15:03:25.792109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.322 [2024-04-26 15:03:25.792368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.792396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.792625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.792988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.793018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.793409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.793773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.793801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.794184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.794549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.794576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.794946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.795324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.795352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.795745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.796111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.796139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 
00:26:43.323 [2024-04-26 15:03:25.796505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.796756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.796790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.796981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.797262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.797289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.797663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.798102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.798130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.798515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.798852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.798882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.799153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.799441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.799469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.799671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.799819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.799856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.800141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.800379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.800406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 
00:26:43.323 [2024-04-26 15:03:25.800744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.801033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.801061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.801296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.801669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.801697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.801959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.802314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.802340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.802790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.803012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.803047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.803421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.803658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.803688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.804046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.804406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.804432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.804665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.805124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.805151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 
00:26:43.323 [2024-04-26 15:03:25.805530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.805891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.805919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.806169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.806419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.806450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.323 qpair failed and we were unable to recover it. 00:26:43.323 [2024-04-26 15:03:25.806802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.807209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.323 [2024-04-26 15:03:25.807237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.807503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.807871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.807900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.808290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.808544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.808573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.808917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.809279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.809305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.809757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.810113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.810148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 
00:26:43.324 [2024-04-26 15:03:25.810416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.810790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.810817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.811100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.811477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.811503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.811674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.811986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.812014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.812401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.812760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.812787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.813204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.813569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.813597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.813972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.814350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.814376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.814767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.815137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.815172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 
00:26:43.324 [2024-04-26 15:03:25.815537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.815920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.815948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.816315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.816566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.816592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.816980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.817231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.817264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.817646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.817920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.817950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.818345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.818717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.818743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.819183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.819536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.819562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.819812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.820249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.820277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 
00:26:43.324 [2024-04-26 15:03:25.820551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.820898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.820926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.821350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.821728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.821756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.822107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.822345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.822374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.822639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.822980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.823008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.823367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.823711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.823737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.824097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.824471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.824497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.824901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.825185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.825212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 
00:26:43.324 [2024-04-26 15:03:25.825609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.825982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.826009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.826379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.826731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.826758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.827125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.827384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.324 [2024-04-26 15:03:25.827410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.324 qpair failed and we were unable to recover it. 00:26:43.324 [2024-04-26 15:03:25.827788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.828058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.828085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.828353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.828727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.828753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.829012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.829419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.829445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.829823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.830204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.830232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 
00:26:43.325 [2024-04-26 15:03:25.830612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.830883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.830912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.831203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.831449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.831476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.831912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.832279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.832305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.832676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.833030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.833057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.833430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.833807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.833834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.834247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.834477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.834502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.834649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.834885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.834914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 
00:26:43.325 [2024-04-26 15:03:25.835276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.835638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.835665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.836037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.836405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.836432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.836809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.837262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.837290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.837691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.838071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.838101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.838478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.838859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.838888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.839148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.839517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.839543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.839922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.840175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.840207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 
00:26:43.325 [2024-04-26 15:03:25.840521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.840864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.840892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.841268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.841528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.841554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.841930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.842284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.842311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.842657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.843017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.843045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.843414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.843675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.843701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.844083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.844425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.844451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.325 qpair failed and we were unable to recover it. 00:26:43.325 [2024-04-26 15:03:25.844800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.845158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.325 [2024-04-26 15:03:25.845185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 
00:26:43.326 [2024-04-26 15:03:25.845560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.845930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.845958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.846386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.846767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.846793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.847135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.847503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.847529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.847897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.848254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.848281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.848661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.849011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.849039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.849397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.849762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.849788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.850148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.850526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.850553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 
00:26:43.326 [2024-04-26 15:03:25.850912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.851291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.851318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.851559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.851925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.851953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.852333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.852660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.852687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.853165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.853500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.853526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.853765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.854177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.854205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.854575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.854812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.854849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.855224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.855564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.855590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 
00:26:43.326 [2024-04-26 15:03:25.855948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.856204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.856231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.856611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.856952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.856979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.857323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.857575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.857606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.857986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.858322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.858349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.858720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.859086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.859115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.859500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.859896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.859925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.860299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.860661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.860687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 
00:26:43.326 [2024-04-26 15:03:25.861052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.861441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.861468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.861852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.862236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.862263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.862626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.862990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.863018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.863390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.863757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.863784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.863994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.864241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.864268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.326 qpair failed and we were unable to recover it. 00:26:43.326 [2024-04-26 15:03:25.864589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.326 [2024-04-26 15:03:25.864936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.864964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.865356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.865731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.865758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 
00:26:43.327 [2024-04-26 15:03:25.866020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.866388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.866415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.866664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.867030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.867058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.867319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.867650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.867677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.868030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.868364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.868391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.868768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.868999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.869030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.869423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.869829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.869876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.870263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.870637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.870663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 
00:26:43.327 [2024-04-26 15:03:25.871044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.871470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.871496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.871850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.872222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.872249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.872598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.872854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.872885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.873275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.873640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.873667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.874062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.874415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.874442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.874819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.875183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.875210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.875584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.875958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.875986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 
00:26:43.327 [2024-04-26 15:03:25.876359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.876732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.876759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.877057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.877411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.877438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.877808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.878205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.878233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.878573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.878936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.878965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.327 qpair failed and we were unable to recover it. 00:26:43.327 [2024-04-26 15:03:25.879349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.879702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.327 [2024-04-26 15:03:25.879730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.880102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.880476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.880503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.880873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.881221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.881247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 
00:26:43.328 [2024-04-26 15:03:25.881616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.881886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.881913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.882273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.882615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.882642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.882898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.883284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.883312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.883563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.883910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.883937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.884194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.884525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.884552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.884931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.885306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.885332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.885720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.886078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.886106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 
00:26:43.328 [2024-04-26 15:03:25.886484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.886859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.886887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.887253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.887621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.887648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.887966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.888310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.888336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.888690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.889043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.889071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.889449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.889816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.889857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.890237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.890589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.890616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.891005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.891389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.891416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 
00:26:43.328 [2024-04-26 15:03:25.891758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.892141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.892169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.892545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.892784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.892813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.893210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.893560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.893587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.893939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.894188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.894216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.894579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.894960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.894988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.895363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.895732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.895759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.896116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.896481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.896508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 
00:26:43.328 [2024-04-26 15:03:25.896882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.897229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.897255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.328 qpair failed and we were unable to recover it. 00:26:43.328 [2024-04-26 15:03:25.897616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.328 [2024-04-26 15:03:25.898084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.898112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.898463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.898826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.898864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.899238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.899624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.899650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.900007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.900343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.900371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.900729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.901149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.901176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.901552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.901912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.901940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 
00:26:43.329 [2024-04-26 15:03:25.902316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.902681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.902708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.902972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.903342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.903369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.903730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.904111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.904139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.904506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.904873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.904902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.905268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.905633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.905665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.906032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.906383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.906409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.906770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.907135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.907163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 
00:26:43.329 [2024-04-26 15:03:25.907499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.907745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.907773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.908115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.908473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.908499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.908887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.909285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.909313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.909695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.910062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.910090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.910447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.910795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.910821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.911191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.911540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.911568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.911959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.912344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.912371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 
00:26:43.329 [2024-04-26 15:03:25.912729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.913109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.913143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.913523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.913886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.913913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.914343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.914693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.914720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.915108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.915471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.915506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.915885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.916250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.916277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.916643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.916986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.917014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.329 [2024-04-26 15:03:25.917286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.917680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.917707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 
00:26:43.329 [2024-04-26 15:03:25.917965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.918287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.329 [2024-04-26 15:03:25.918315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.329 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.918738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.919092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.919120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.919467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.919849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.919878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.920217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.920581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.920613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.920971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.921333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.921360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.921754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.922120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.922147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.922493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.922827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.922865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 
00:26:43.330 [2024-04-26 15:03:25.923293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.923637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.923664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.924038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.924371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.924399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.924755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.925127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.925156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.925409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.925656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.925683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.926064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.926406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.926432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.926877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.927252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.927278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.927543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.927910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.927945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 
00:26:43.330 [2024-04-26 15:03:25.928302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.928664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.928690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.929083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.929473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.929500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.929762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.930103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.930131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.930493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.930860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.930889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.931255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.931606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.931632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.931983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.932347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.932374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.932635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.932974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.933002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 
00:26:43.330 [2024-04-26 15:03:25.933378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.933756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.933782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.934140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.934490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.330 [2024-04-26 15:03:25.934517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.330 qpair failed and we were unable to recover it. 00:26:43.330 [2024-04-26 15:03:25.934955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.935327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.935353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.935729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.935971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.936001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.936376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.936745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.936772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.937015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.937165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.937193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.937613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.937977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.938005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 
00:26:43.331 [2024-04-26 15:03:25.938384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.938756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.938783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.939136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.939469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.939495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.939886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.940124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.940165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.940584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.940966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.940994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.941351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.941713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.941739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.942128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.942493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.942521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.942902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.943268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.943294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 
00:26:43.331 [2024-04-26 15:03:25.943652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.943889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.943919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.944265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.944651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.944678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.945067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.945415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.945442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.945708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.946095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.946124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.946584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.946906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.946934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.947307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.947659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.947685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.947968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.948357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.948383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 
00:26:43.331 [2024-04-26 15:03:25.948766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.949046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.949073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.949415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.949775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.949801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.950198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.950417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.950445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.950851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.951232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.951258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.951616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.951975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.952003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.952309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.952674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.952700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.953094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.953459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.953486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 
00:26:43.331 [2024-04-26 15:03:25.953872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.954179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.954205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.954559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.954944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.954972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.331 qpair failed and we were unable to recover it. 00:26:43.331 [2024-04-26 15:03:25.955359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.955718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.331 [2024-04-26 15:03:25.955746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.956109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.956353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.956384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.956774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.957111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.957140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.957493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.957853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.957881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.958223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.958585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.958612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 
00:26:43.332 [2024-04-26 15:03:25.958975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.959340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.959367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.959699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.960049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.960077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.960424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.960808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.960834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.961219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.961583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.961610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.961864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.962207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.962234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.962598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.962962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.962991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.963339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.963584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.963613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 
00:26:43.332 [2024-04-26 15:03:25.964000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.964341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.964368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.964619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.964918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.964946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.965337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.965695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.965722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.966090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.966435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.966462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.966886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.967265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.967293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.967653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.968016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.968044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.968292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.968670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.968697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 
00:26:43.332 [2024-04-26 15:03:25.969061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.969439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.969466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.969859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.970199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.970225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.970603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.970978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.971007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.332 qpair failed and we were unable to recover it. 00:26:43.332 [2024-04-26 15:03:25.971390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.971755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.332 [2024-04-26 15:03:25.971781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.333 qpair failed and we were unable to recover it. 00:26:43.333 [2024-04-26 15:03:25.972173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.333 [2024-04-26 15:03:25.972535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.333 [2024-04-26 15:03:25.972563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.333 qpair failed and we were unable to recover it. 00:26:43.333 [2024-04-26 15:03:25.972825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.333 [2024-04-26 15:03:25.973203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.333 [2024-04-26 15:03:25.973230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.333 qpair failed and we were unable to recover it. 00:26:43.333 [2024-04-26 15:03:25.973482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.333 [2024-04-26 15:03:25.973854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.333 [2024-04-26 15:03:25.973882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.333 qpair failed and we were unable to recover it. 
00:26:43.333 [2024-04-26 15:03:25.974152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.333 [2024-04-26 15:03:25.974527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.333 [2024-04-26 15:03:25.974554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.333 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.974916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.975289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.975318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.975697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.976043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.976071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.976488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.976873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.976902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.977271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.977642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.977668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.978039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.978416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.978443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.978823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.979194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.979221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-04-26 15:03:25.979597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.979964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.979993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.980363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.980715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.980742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.981125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.981489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.981516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.981900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.982267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.982295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.982671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.983034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.983061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.983423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.983791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.983818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.984281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.984641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.984668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 
00:26:43.605 [2024-04-26 15:03:25.985080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.985428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.985455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.985826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.986228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.986254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.986621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.987002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.605 [2024-04-26 15:03:25.987029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.605 qpair failed and we were unable to recover it. 00:26:43.605 [2024-04-26 15:03:25.987410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.987780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.987806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.988153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.988457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.988483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.988866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.989216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.989243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.989610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.989980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.990008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 
00:26:43.606 [2024-04-26 15:03:25.990361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.990703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.990729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.991091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.991335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.991364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.991731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.991975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.992001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.992315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.992659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.992686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.993046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.993422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.993449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.993832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.994113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.994141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.994520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.994915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.994944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 
00:26:43.606 [2024-04-26 15:03:25.995296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.995648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.995675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.996052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.996403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.996429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.996674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.997061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.997090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.997466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.997832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.997867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.998149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.998516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.998543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.998920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.999183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.999211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:25.999577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.999918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:25.999946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 
00:26:43.606 [2024-04-26 15:03:26.000304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.000670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.000697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:26.000955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.001312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.001339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:26.001675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.002054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.002083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:26.002406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.002767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.002794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:26.003177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.003527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.003554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.606 qpair failed and we were unable to recover it. 00:26:43.606 [2024-04-26 15:03:26.003948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.004290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.606 [2024-04-26 15:03:26.004317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.004744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.004996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.005028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 
00:26:43.607 [2024-04-26 15:03:26.005405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.005762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.005788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.006136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.006464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.006490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.006882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.007140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.007170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.007553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.007894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.007922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.008302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.008663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.008690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.008950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.009318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.009345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.009721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.010069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.010097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 
00:26:43.607 [2024-04-26 15:03:26.010467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.010832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.010868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.011219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.011583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.011610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.011986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.012356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.012383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.012764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.013117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.013146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.013465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.013862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.013890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.014285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.014649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.014675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.015033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.015390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.015418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 
00:26:43.607 [2024-04-26 15:03:26.015785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.016157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.016185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.016568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.016864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.016894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.017272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.017604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.017630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.018014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.018369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.018395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.018740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.019118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.019146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.019549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.019890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.019918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 00:26:43.607 [2024-04-26 15:03:26.020255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.020641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.020667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.607 qpair failed and we were unable to recover it. 
00:26:43.607 [2024-04-26 15:03:26.020980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.021369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.607 [2024-04-26 15:03:26.021395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.021773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.022110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.022137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.022492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.022869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.022897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.023294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.023571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.023598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.023960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.024323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.024356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.024750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.025018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.025050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.025476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.025821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.025856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 
00:26:43.608 [2024-04-26 15:03:26.026139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.026479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.026506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.026887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.027284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.027310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.027685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.027932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.027966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.028339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.028553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.028582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.028978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.029268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.029294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.029679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.030021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.030050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 00:26:43.608 [2024-04-26 15:03:26.030370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.030738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.608 [2024-04-26 15:03:26.030765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.608 qpair failed and we were unable to recover it. 
00:26:43.608 [2024-04-26 15:03:26.031215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.608 [2024-04-26 15:03:26.031455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.608 [2024-04-26 15:03:26.031495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420
00:26:43.608 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously between local timestamps 15:03:26.031 and 15:03:26.142: posix_sock_create connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f0670000b90 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:43.615 [2024-04-26 15:03:26.142178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.615 [2024-04-26 15:03:26.142522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.615 [2024-04-26 15:03:26.142549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420
00:26:43.615 qpair failed and we were unable to recover it.
00:26:43.615 [2024-04-26 15:03:26.142961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.143217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.143250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.143627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.143971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.143999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.144451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.144822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.144871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.145298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.145642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.145668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.145913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.146287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.146315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.146703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.147155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.147184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.147523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.147896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.147925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 
00:26:43.615 [2024-04-26 15:03:26.148317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.148680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.148713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.149068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.149388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.149415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.149789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.150169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.150198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.150575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.150937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.150966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.151324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.151688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.151715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.615 [2024-04-26 15:03:26.152097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.152449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.615 [2024-04-26 15:03:26.152476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.615 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.152824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.153284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.153312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 
00:26:43.616 [2024-04-26 15:03:26.153755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.154182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.154210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.154564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.154859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.154889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.155259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.155534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.155560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.155798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.156155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.156189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.156537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.156901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.156930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.157301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.157625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.157652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.157925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.158326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.158355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 
00:26:43.616 [2024-04-26 15:03:26.158623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.158874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.158906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.159350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.159687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.159713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.160056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.160411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.160438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.160707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.161047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.161076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.161434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.161656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.161686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.162075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.162433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.162460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.162853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.163156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.163184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 
00:26:43.616 [2024-04-26 15:03:26.163551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.163893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.163922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.164317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.164683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.164710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.165029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.165410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.616 [2024-04-26 15:03:26.165437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.616 qpair failed and we were unable to recover it. 00:26:43.616 [2024-04-26 15:03:26.165788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.166132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.166160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.166493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.166862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.166890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.167322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.167731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.167757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.168142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.168516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.168543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 
00:26:43.617 [2024-04-26 15:03:26.168766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.169143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.169172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.169546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.169918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.169947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.170326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.170688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.170716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.170977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.171340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.171367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.171733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.172091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.172120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.172499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.172859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.172888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.173263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.173633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.173660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 
00:26:43.617 [2024-04-26 15:03:26.174034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.174389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.174416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.174755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.175095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.175124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.175491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.175855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.175884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.176134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.176363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.176390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.176780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.177200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.177228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.177600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.177972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.178002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.178395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.178756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.178783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 
00:26:43.617 [2024-04-26 15:03:26.179160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.179526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.179554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.179925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.180148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.180177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.180586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.180974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.181001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.181368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.181703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.181729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.617 qpair failed and we were unable to recover it. 00:26:43.617 [2024-04-26 15:03:26.181997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.182387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.617 [2024-04-26 15:03:26.182413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.182789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.183138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.183166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.183570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.183934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.183963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 
00:26:43.618 [2024-04-26 15:03:26.184358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.184822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.184862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.185211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.185566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.185596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.185923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.186302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.186331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.186744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.187123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.187151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.187512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.187858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.187887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.188148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.188402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.188432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.188717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.189084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.189112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 
00:26:43.618 [2024-04-26 15:03:26.189481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.189861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.189891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.190123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.190397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.190424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.190822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.191217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.191244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.191596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.191992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.192020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.192420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.192765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.192792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.193208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.193453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.193479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.193748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.194137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.194165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 
00:26:43.618 [2024-04-26 15:03:26.194536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.194892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.194920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.195348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.195693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.195719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.196139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.196501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.196529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.196911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.197267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.197294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.197661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.197892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.197922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.618 [2024-04-26 15:03:26.198252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.198622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.618 [2024-04-26 15:03:26.198649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.618 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.199028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.199407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.199435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 
00:26:43.619 [2024-04-26 15:03:26.199822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.200198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.200226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.200589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.200973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.201000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.201388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.201760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.201787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.202161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.202386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.202415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.202827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.203205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.203233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.203600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.203932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.203963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.204336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.204693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.204721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 
00:26:43.619 [2024-04-26 15:03:26.205079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.205431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.205458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.205847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.206232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.206259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.206634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.206871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.206901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.207293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.207670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.207697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.208088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.208453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.208480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.208873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.209259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.209287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.209661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.209909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.209938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 
00:26:43.619 [2024-04-26 15:03:26.210311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.210648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.210676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.210938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.211316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.211345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.211740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.212154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.212184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.212539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.212879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.212908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.213263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.213626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.213653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.214030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.214393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.214419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 00:26:43.619 [2024-04-26 15:03:26.214788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.215138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.619 [2024-04-26 15:03:26.215166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.619 qpair failed and we were unable to recover it. 
00:26:43.619 [2024-04-26 15:03:26.215479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.215836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.215885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.216263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.216633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.216661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.217016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.217384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.217413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.217776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.218184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.218213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.218598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.218747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.218775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.219213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.219566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.219593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.219956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.220298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.220325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-04-26 15:03:26.220672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.221014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.221043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.221341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.221720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.221747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.222096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.222468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.222495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.222854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.223231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.223259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.223500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.223934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.223963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.224314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.224673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.224699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.225094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.225454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.225481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-04-26 15:03:26.225825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.226203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.226231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.226643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.226890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.226919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.227303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.227644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.227671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.227965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.228341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.228368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.228717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.229085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.229113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.229489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.229849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.229877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.230222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.230564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.230591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 
00:26:43.620 [2024-04-26 15:03:26.230932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.231183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.231212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.231585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.231820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.231861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.620 [2024-04-26 15:03:26.232217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.232449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.620 [2024-04-26 15:03:26.232475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.620 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.232866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.233291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.233318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.233666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.234038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.234066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.234432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.234819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.234854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.235034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.235411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.235438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-04-26 15:03:26.235822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.236221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.236250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.236637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.237017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.237046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.237432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.237816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.237853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.238228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.238607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.238633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.238988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.239241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.239267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.239637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.239979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.240007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.240388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.240756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.240783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-04-26 15:03:26.241211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.241601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.241627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.241981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.242345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.242372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.242727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.243079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.243106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.243521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.243876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.243907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.244276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.244630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.244657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.245030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.245422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.245450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.245822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.246226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.246256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 
00:26:43.621 [2024-04-26 15:03:26.246627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.247011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.247039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.247422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.247769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.247796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.248221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.248594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.248621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.248978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.249354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.249381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.621 [2024-04-26 15:03:26.249760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.250093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.621 [2024-04-26 15:03:26.250120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.621 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.250493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.250868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.250896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.251278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.251619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.251645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 
00:26:43.622 [2024-04-26 15:03:26.252006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.252367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.252394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.252770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.253150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.253184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.253423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.253815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.253872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.254235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.254622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.254649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.255010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.255260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.255291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.255645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.256048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.256078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.256528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.256859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.256889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 
00:26:43.622 [2024-04-26 15:03:26.257258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.257540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.257566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.257821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.258170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.258198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.258573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.258936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.622 [2024-04-26 15:03:26.258966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.622 qpair failed and we were unable to recover it. 00:26:43.622 [2024-04-26 15:03:26.259309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.259680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.259708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.260166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.260496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.260532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.260793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.262788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.262868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.263285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.263654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.263683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 
00:26:43.892 [2024-04-26 15:03:26.263981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.264381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.264409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.264786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.265136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.265164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.265432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.265804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.265831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.266220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.266584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.266611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.266978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.267351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.267379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.267745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.268118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.268146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.268539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.268910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.268939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 
00:26:43.892 [2024-04-26 15:03:26.269316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.269657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.269692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.892 [2024-04-26 15:03:26.270084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.270460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.892 [2024-04-26 15:03:26.270488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.892 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.270858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.271210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.271237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.271591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.271951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.271980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.272379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.272613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.272642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.273002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.273366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.273393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.273755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.274123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.274151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 
00:26:43.893 [2024-04-26 15:03:26.274518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.274922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.274949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.275322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.275686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.275712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.276115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.276478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.276505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.276885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.277260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.277293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.277670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.278058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.278088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.278471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.278831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.278872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.279294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.279645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.279672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 
00:26:43.893 [2024-04-26 15:03:26.280042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.280386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.280413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.280771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.281142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.281171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.281621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.281869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.281896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.282274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.282665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.282691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.283076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.283448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.283475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.283736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.284074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.284104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.284474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.284725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.284752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 
00:26:43.893 [2024-04-26 15:03:26.285132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.285482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.285511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.285766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.286093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.286121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.286513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.286790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.286817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.287127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.287496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.287523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.287778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.288141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.288170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.893 qpair failed and we were unable to recover it. 00:26:43.893 [2024-04-26 15:03:26.288533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.893 [2024-04-26 15:03:26.288902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.288930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.289301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.289670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.289697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 
00:26:43.894 [2024-04-26 15:03:26.289908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.290184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.290212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.290616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.290973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.291001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.291371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.291741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.291767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.292203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.292617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.292643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.292964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.293309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.293335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.293711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.294086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.294114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.294480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.294829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.294868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 
00:26:43.894 [2024-04-26 15:03:26.295287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.295613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.295640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.296004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.296370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.296398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.296780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.297140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.297170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.297549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.297917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.297945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.298190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.298461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.298489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.298879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.299257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.299284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.299639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.299982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.300011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 
00:26:43.894 [2024-04-26 15:03:26.300381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.300748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.300774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.301140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.301527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.301554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.301907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.302093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.302119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.302572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.302910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.302938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.303405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.303772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.303801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.304233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.304578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.304606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.304984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.305340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.305367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 
00:26:43.894 [2024-04-26 15:03:26.305722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.305949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.305979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.306221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.306591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.306618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.894 [2024-04-26 15:03:26.306879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.307247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.894 [2024-04-26 15:03:26.307274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.894 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.307657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.307993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.308022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.308284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.308652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.308679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.309042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.309402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.309429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.309817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.310195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.310223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 
00:26:43.895 [2024-04-26 15:03:26.310480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.310831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.310871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.311284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.311652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.311679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.311910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.312328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.312355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.312743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.312994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.313023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.313290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.313627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.313654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.313917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.314285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.314313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.314705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.314946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.314978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 
00:26:43.895 [2024-04-26 15:03:26.315227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.315578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.315605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.315980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.316276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.316302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.316673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.316901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.316931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.317291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.317526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.317552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.317786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.318191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.318219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.318472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.318716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.318743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.319134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.319510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.319538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 
00:26:43.895 [2024-04-26 15:03:26.319901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.320277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.320304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.320664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.321030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.321058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.321512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.321861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.321888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.322245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.322614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.322641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.323007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.323270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.323296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.323672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.324033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.324062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.324310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.324605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.324631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 
00:26:43.895 [2024-04-26 15:03:26.324897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.325128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.325155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.325431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.325798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.325825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.895 [2024-04-26 15:03:26.326180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.326431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.895 [2024-04-26 15:03:26.326461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.895 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.326815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.327154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.327182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.327413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.327786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.327813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.328182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.328518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.328545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.328914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.329195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.329222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 
00:26:43.896 [2024-04-26 15:03:26.329590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.329912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.329940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.330182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.330553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.330580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.330981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.331355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.331383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.331633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.331987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.332015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.332385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.332736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.332762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.333117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.333493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.333521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.333889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.334301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.334328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 
00:26:43.896 [2024-04-26 15:03:26.334669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.335017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.335045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.335433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.335669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.335698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.336052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.336411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.336437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.336833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.337312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.337340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.337728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.338080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.338108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.338469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.338700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.338729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.339074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.339409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.339435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 
00:26:43.896 [2024-04-26 15:03:26.339810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.340225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.340253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.340589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.340929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.340957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.341321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.341706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.341734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.342018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.342387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.342415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.342722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.343075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.343104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.343517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.343882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.343911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.896 qpair failed and we were unable to recover it. 00:26:43.896 [2024-04-26 15:03:26.344279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.896 [2024-04-26 15:03:26.344670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.344697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 
00:26:43.897 [2024-04-26 15:03:26.345133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.345480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.345508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.345734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.345987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.346019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.346387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.346754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.346781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.347155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.347403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.347429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.347834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.348220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.348247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.348594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.348940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.348968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.349335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.349704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.349732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 
00:26:43.897 [2024-04-26 15:03:26.350168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.350524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.350551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.350786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.351166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.351195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.351461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.351803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.351830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.352245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.352574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.352602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.352978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.353332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.353358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.353715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.353963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.353992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.354358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.354710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.354737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 
00:26:43.897 [2024-04-26 15:03:26.355108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.355355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.355384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.355778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.356148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.356177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.356553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.356935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.356963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.357363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.357743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.357769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.358157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.358492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.358519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.358902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.359260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.359286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 00:26:43.897 [2024-04-26 15:03:26.359662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.360033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.360061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.897 qpair failed and we were unable to recover it. 
00:26:43.897 [2024-04-26 15:03:26.360338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.897 [2024-04-26 15:03:26.360695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.360722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.361081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.361491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.361517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.361765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.362175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.362203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.362599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.362999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.363029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.363471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.363826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.363886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.364285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.364653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.364680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.365041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.365410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.365436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 
00:26:43.898 [2024-04-26 15:03:26.365808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.366187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.366216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.366575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.366965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.366993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.367251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.367599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.367625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.367978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.368355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.368383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.368765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.369138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.369166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.369443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.369875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.369904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.370278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.370650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.370677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 
00:26:43.898 [2024-04-26 15:03:26.371076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.371485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.371512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.371870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.372227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.372265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.372614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.372957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.372985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.373346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.373707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.373733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.374098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.374460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.374488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.374773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.375148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.375176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.375532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.375864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.375891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 
00:26:43.898 [2024-04-26 15:03:26.376286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.376651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.376678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.376918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.377320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.377347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.377686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.378044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.378074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.378352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.378738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.378765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.379009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.379372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.379406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.379817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.380193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.380221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.898 qpair failed and we were unable to recover it. 00:26:43.898 [2024-04-26 15:03:26.380486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.898 [2024-04-26 15:03:26.380833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.380873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 
00:26:43.899 [2024-04-26 15:03:26.381240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.381610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.381637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.382024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.382391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.382418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.382794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.383098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.383126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.383508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.383863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.383892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.384234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.384487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.384517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.384894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.385288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.385316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.385695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.386040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.386067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 
00:26:43.899 [2024-04-26 15:03:26.386415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.386766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.386799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.387203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.387547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.387574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.387976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.388348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.388376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.388739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.389126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.389154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.389516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.389938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.389966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.390352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.390702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.390729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.391086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.391339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.391368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 
00:26:43.899 [2024-04-26 15:03:26.391729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.392139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.392167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.392535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.392899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.392928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.393307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.393689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.393716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.394100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.394488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.394515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.394868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.395234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.395260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.395625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.395996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.396025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.396471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.396719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.396747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 
00:26:43.899 [2024-04-26 15:03:26.397137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.397490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.397517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.397886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.398242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.398269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.398627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.398836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.398877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.399265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.399603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.399629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.899 qpair failed and we were unable to recover it. 00:26:43.899 [2024-04-26 15:03:26.400005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.400332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.899 [2024-04-26 15:03:26.400359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.400730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.401095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.401125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.401478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.401863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.401891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 
00:26:43.900 [2024-04-26 15:03:26.402290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.402634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.402660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.403054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.403435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.403461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.403828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.404207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.404234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.404592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.404965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.404995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.405256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.405605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.405632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.405988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.406225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.406254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.406603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.406970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.406997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 
00:26:43.900 [2024-04-26 15:03:26.407376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.407736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.407761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.408140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.408412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.408438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.408571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.408808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.408836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.409239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.409607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.409633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.410022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.410407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.410432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.410779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.411140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.411168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.411520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.411897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.411925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 
00:26:43.900 [2024-04-26 15:03:26.412294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.412676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.412702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.412956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.413220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.413247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.413611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.413982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.414012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.414410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.414773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.414801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.414999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.415408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.415436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.415627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.415869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.415897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.416291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.416655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.416682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 
00:26:43.900 [2024-04-26 15:03:26.417073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.417425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.417451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.417696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.418081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.418109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.900 qpair failed and we were unable to recover it. 00:26:43.900 [2024-04-26 15:03:26.418484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.418859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.900 [2024-04-26 15:03:26.418887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.419144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.419508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.419535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.419912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.420305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.420332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.420606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.420858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.420886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.421278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.421646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.421673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 
00:26:43.901 [2024-04-26 15:03:26.421915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.422176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.422205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.422527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.422908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.422936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.423321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.423694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.423721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.423981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.424350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.424377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.424720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.424965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.424993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.425355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.425729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.425756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.426027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.426386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.426412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 
00:26:43.901 [2024-04-26 15:03:26.426797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.427088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.427116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.427495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.427874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.427902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.428164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.428529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.428556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.428829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.429219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.429246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.429648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.430091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.430121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.430504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.430869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.430914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.431316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.431553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.431579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 
00:26:43.901 [2024-04-26 15:03:26.432022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.432381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.432408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.432575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.432956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.432984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.433343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.433717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.433743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.433983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.434251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.434278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.434627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.434971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.434999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.435380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.435697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.435723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 00:26:43.901 [2024-04-26 15:03:26.436102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.436342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.901 [2024-04-26 15:03:26.436368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.901 qpair failed and we were unable to recover it. 
00:26:43.901 [2024-04-26 15:03:26.436765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.437109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.437138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.437512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.437756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.437783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.438135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.438488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.438517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.438781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.439159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.439190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.439543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.439907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.439935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.440305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.440666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.440697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.441070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.441433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.441461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 
00:26:43.902 [2024-04-26 15:03:26.441831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.442083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.442111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.442492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.442851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.442879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.443310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.443642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.443669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.444022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.444351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.444378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.444752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.445105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.445134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.445518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.445912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.445940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.446229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.446590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.446617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 
00:26:43.902 [2024-04-26 15:03:26.446993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.447358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.447384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.447766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.448146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.448176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.448552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.448911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.448940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.449300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.449646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.449673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.450081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.450527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.450554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.450905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.451269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.451295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.902 qpair failed and we were unable to recover it. 00:26:43.902 [2024-04-26 15:03:26.451681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.452023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.902 [2024-04-26 15:03:26.452051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 
00:26:43.903 [2024-04-26 15:03:26.452468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.452835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.452877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.453280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.453659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.453687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.454035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.454276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.454303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.454565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.454907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.454935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.455313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.455542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.455568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.455927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.456316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.456343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.456606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.456972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.457001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 
00:26:43.903 [2024-04-26 15:03:26.457447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.457822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.457862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.458113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.458500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.458527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.458870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.459245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.459272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.459636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.460021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.460051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.460416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.460793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.460820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.461217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.461586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.461613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.461999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.462343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.462370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 
00:26:43.903 [2024-04-26 15:03:26.462742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.463112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.463140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.463425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.463818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.463876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.464238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.464605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.464632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.465012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.465388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.465414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.465767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.466034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.466063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.466440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.466780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.466807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.467196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.467624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.467652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 
00:26:43.903 [2024-04-26 15:03:26.468054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.468448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.468475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.468857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.469268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.469296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.469530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.469892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.469920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.470313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.470692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.470718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.903 qpair failed and we were unable to recover it. 00:26:43.903 [2024-04-26 15:03:26.471120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.903 [2024-04-26 15:03:26.471491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.471518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.471876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.472277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.472304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.472587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.472957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.472985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 
00:26:43.904 [2024-04-26 15:03:26.473351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.473721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.473748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.474102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.474447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.474474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.474828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.475248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.475275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.475527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.475799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.475829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.476314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.476664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.476692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.477072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.477415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.477442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.477782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.478165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.478194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 
00:26:43.904 [2024-04-26 15:03:26.478440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.478792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.478819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.479144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.479508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.479534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.479899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.480273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.480301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.480558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.480983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.481012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.481407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.481771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.481798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.482186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.482549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.482586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.483018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.483372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.483398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 
00:26:43.904 [2024-04-26 15:03:26.483770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.484113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.484142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.484558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.484928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.484956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.485349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.485715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.485742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.486009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.486383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.486410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.486781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.487139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.487168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.487548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.487889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.487917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.488305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.488656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.488682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 
00:26:43.904 [2024-04-26 15:03:26.488939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.489345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.489372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.489736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.490079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.490113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.904 qpair failed and we were unable to recover it. 00:26:43.904 [2024-04-26 15:03:26.490501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.904 [2024-04-26 15:03:26.490861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.490890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.491266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.491633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.491660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.492015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.492369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.492396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.492748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.493080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.493109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.493488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.493770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.493796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 
00:26:43.905 [2024-04-26 15:03:26.494166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.494439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.494466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.494868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.495324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.495350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.495734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.496107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.496136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.496507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.496880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.496908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.497278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.497553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.497585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.497863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.498210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.498237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.498591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.498954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.498983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 
00:26:43.905 [2024-04-26 15:03:26.499344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.499689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.499716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.500077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.500465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.500492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.500857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.501227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.501255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.501644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.501993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.502022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.502368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.502618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.502648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.503010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.503376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.503403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.503761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.504141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.504170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 
00:26:43.905 [2024-04-26 15:03:26.504535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.504900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.504934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.505316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.505686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.505713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.506117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.506449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.506482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.506877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.507256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.507283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.507639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.507984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.508013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.508375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.508716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.508743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.905 [2024-04-26 15:03:26.509090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.509457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.509485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 
00:26:43.905 [2024-04-26 15:03:26.509835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.510219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.905 [2024-04-26 15:03:26.510245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.905 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.510608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.510966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.510994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.511367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.511763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.511790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.512183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.512536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.512563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.512964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.513269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.513297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.513686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.514109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.514137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.514492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.514607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.514636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 
00:26:43.906 [2024-04-26 15:03:26.514962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.515131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.515159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.515571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.515916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.515945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.516321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.516689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.516716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.517122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.517493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.517521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.517893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.518262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.518289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.518547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.518895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.518924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.519291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.519655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.519681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 
00:26:43.906 [2024-04-26 15:03:26.520068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.520429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.520456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.520811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.521176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.521204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.521549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.521917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.521946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.522323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.522683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.522711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.523080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.523424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.523452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.523819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.524149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.524176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.524491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.524868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.524895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 
00:26:43.906 [2024-04-26 15:03:26.525266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.525617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.525644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.526019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.526408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.526435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.526807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.527205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.527234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.527628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.527989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.528018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.528367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.528721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.528748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.529119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.529486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.529512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.529889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.530285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.530311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 
00:26:43.906 [2024-04-26 15:03:26.530652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.531016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.531044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.906 qpair failed and we were unable to recover it. 00:26:43.906 [2024-04-26 15:03:26.531422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.906 [2024-04-26 15:03:26.531787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.531815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.532210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.532462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.532488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.532848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.533210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.533236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.533490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.533864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.533892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.534286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.534654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.534680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.535090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.535408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.535435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 
00:26:43.907 [2024-04-26 15:03:26.535824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.536214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.536242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.536610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.536981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.537009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.537413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.537777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.537804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.538234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.538597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.538624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.538865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.539265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.539292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.539668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.540036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.540065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.540425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.540775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.540806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 
00:26:43.907 [2024-04-26 15:03:26.541239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.541540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.541567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.541936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.542177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.542207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.542469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.542726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.542752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.543115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.543483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.543511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.543904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.544268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.544296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.544726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.544954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.544983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.545387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.545730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.545757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 
00:26:43.907 [2024-04-26 15:03:26.546122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.546369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.546396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.546823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.547088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.547119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.547481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.547727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.907 [2024-04-26 15:03:26.547753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.907 qpair failed and we were unable to recover it. 00:26:43.907 [2024-04-26 15:03:26.548129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.908 [2024-04-26 15:03:26.548504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.908 [2024-04-26 15:03:26.548531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:43.908 qpair failed and we were unable to recover it. 00:26:43.908 [2024-04-26 15:03:26.548908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.549171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.549200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.549472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.549734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.549761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.550137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.550550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.550577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 
00:26:44.185 [2024-04-26 15:03:26.550809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.551171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.551200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.551506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.551880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.551910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.552250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.552606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.552632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.552996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.553347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.553374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.553726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.553978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.554005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.554414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.554752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.554778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.555126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.555467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.555493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 
00:26:44.185 [2024-04-26 15:03:26.555878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.556249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.556275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.556623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.556860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.556888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.557237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.557486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.557513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.557865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.558236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.558264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.558703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.558956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.558986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.559356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.559713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.559741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.559997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.560366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.560393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 
00:26:44.185 [2024-04-26 15:03:26.560647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.561030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.561058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.561439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.561808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.561835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.562038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.562303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.562330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.562700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.563100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.563128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.563489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.563715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.563742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.185 qpair failed and we were unable to recover it. 00:26:44.185 [2024-04-26 15:03:26.564097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.185 [2024-04-26 15:03:26.564470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.564498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.564858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.565221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.565248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-04-26 15:03:26.565626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.565951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.565978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.566376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.566586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.566612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.566782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.567164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.567193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.567551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.567899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.567928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.568284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.568674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.568701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.569070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.569440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.569467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.569594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.569954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.569983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-04-26 15:03:26.570360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.570735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.570762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.571130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.571512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.571538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.571805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.572242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.572271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.572495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.572632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.572659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.573034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.573394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.573422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.573771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.574133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.574162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.574541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.574909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.574938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-04-26 15:03:26.575193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.575542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.575568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.575911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.576270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.576297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.576734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.577144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.577172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.577611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.577979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.578007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.578250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.578640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.578667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.579058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.579423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.579451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.579831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.580216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.580243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 
00:26:44.186 [2024-04-26 15:03:26.580676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.581032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.581061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.581431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.581750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.581778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.582119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.582436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.582464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.582869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.583240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.583268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.583619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.583976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.584005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.186 qpair failed and we were unable to recover it. 00:26:44.186 [2024-04-26 15:03:26.584427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.186 [2024-04-26 15:03:26.584658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.584684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.584938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.585286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.585313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-04-26 15:03:26.585529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.585781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.585812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.586216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.586506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.586532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.586926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.587311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.587338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.587766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.588141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.588169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.588428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.588821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.588861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.589218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.589577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.589603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.589982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.590359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.590387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-04-26 15:03:26.590754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.591103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.591132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.591497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.591866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.591894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.592259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.592623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.592650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.593097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.593442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.593469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.593862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.594207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.594234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.594613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.594956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.594985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.595247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.595639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.595667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-04-26 15:03:26.596010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.596374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.596402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.596792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.597171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.597200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.597592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.597956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.597984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.598396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.598737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.598765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.599063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.599431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.599458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.599722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.600091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.600125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.600475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.600703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.600732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-04-26 15:03:26.601151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.601518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.601545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.601917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.602159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.602186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.602546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.602915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.602944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.603316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.603658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.603685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.604050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.604298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.604324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-04-26 15:03:26.604697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-04-26 15:03:26.605074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.605102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.605480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.605849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.605876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 
00:26:44.188 [2024-04-26 15:03:26.606263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.606628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.606654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.607042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.607395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.607429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.607781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.608100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.608129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.608359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.608682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.608709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.609076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.609445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.609472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.609863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.610212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.610240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.610605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.611022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.611051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 
00:26:44.188 [2024-04-26 15:03:26.611434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.611793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.611820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.612167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.612411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.612438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.612849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.613230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.613257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.613647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.613932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.613962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.614342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.614718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.614750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.615137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.615491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.615518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.615911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.616278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.616305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 
00:26:44.188 [2024-04-26 15:03:26.616680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.617048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.617077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.617320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.617691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.617718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.617955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.618310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.618338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.618695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.619031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.619061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.619443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.619784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.619812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.620244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.620564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.620592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.620954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.621318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.621345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 
00:26:44.188 [2024-04-26 15:03:26.621776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.622140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.622174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.622521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.622762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.622789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.623035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.623417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.623444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.623804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.624187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.624216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.624474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.624860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.624889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-04-26 15:03:26.625262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.625627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-04-26 15:03:26.625655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.626018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.626392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.626420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 
00:26:44.189 [2024-04-26 15:03:26.626633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.627034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.627063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.627446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.627789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.627816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.628206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.628441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.628470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.628872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.629240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.629267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.629655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.630018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.630047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.630417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.630782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.630809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.631231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.631571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.631599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 
00:26:44.189 [2024-04-26 15:03:26.631981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.632263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.632291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.632649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.632984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.633013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.633273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.633617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.633644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.634009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.634355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.634382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.634706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.635027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.635055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.635422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.635649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.635678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.636069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.636416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.636443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 
00:26:44.189 [2024-04-26 15:03:26.636798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.637205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.637234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.637621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.637990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.638019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.638457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.638802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.638828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.639172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.639584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.639612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.639972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.640350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.640376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.640733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.641106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.641135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-04-26 15:03:26.641508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.641849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.641878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 
00:26:44.189 [2024-04-26 15:03:26.642249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.642616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-04-26 15:03:26.642643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.643016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.643371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.643399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.643780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.644124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.644153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.644527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.644894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.644923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.645286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.645706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.645732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.645995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.646360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.646388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.646748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.647085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.647114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-04-26 15:03:26.647485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.647859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.647887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.648255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.648609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.648636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.648992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.649342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.649369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.649664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.650051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.650079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.650532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.650863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.650891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.651272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.651684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.651710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.652097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.652337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.652367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-04-26 15:03:26.652737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.653090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.653120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.653368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.653778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.653805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.654159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.654463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.654490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.654867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.655249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.655278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.655651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.655898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.655928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.656338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.656671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.656698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.657072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.657486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.657513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-04-26 15:03:26.657851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.658119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.658149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.658512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.658764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.658794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.659165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.659518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.659545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.659902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.660265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.660293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.660706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.661034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.661062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.661449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.661813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.661850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-04-26 15:03:26.662228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.662635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.662662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-04-26 15:03:26.663017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.663369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-04-26 15:03:26.663396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.663773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.664140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.664169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.664523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.664865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.664894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.665244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.665590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.665616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.665989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.666376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.666405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.666779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.667131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.667159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.667408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.667780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.667807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-04-26 15:03:26.668176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.668547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.668574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.668947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.669332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.669359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.669733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.670087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.670115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.670464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.670848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.670878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.671267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.671622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.671649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.672010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.672367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.672394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.672768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.673132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.673161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-04-26 15:03:26.673397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.673675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.673703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.674084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.674449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.674475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.674816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.675223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.675251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.675629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.675876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.675906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.676219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.676634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.676661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.677028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.677426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.677454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.677833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.678244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.678273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-04-26 15:03:26.678651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.679008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.679037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.679409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.679783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.679810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.680070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.680433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.680460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.680831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.681129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.681155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.681520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.681889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.681918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.682299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.682698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.682725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.683109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.683475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.683502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-04-26 15:03:26.683899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.684302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.684330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-04-26 15:03:26.684710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-04-26 15:03:26.685053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.685081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.685456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.685821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.685862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.686142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.686508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.686535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.686901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.687279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.687306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.687752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.687995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.688025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.688407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.688765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.688792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 
00:26:44.192 [2024-04-26 15:03:26.689179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.689553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.689581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.689968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.690313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.690341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.690710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.691076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.691104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.691461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.691824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.691863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.692259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.692626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.692653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.693013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.693383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.693410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.693781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.694189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.694217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 
00:26:44.192 [2024-04-26 15:03:26.694602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.694871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.694900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.695279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.695623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.695650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.696039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.696280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.696310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.696666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.697019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.697047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.697343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.697745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.697772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.698031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.698342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.698370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.698728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.699087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.699116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 
00:26:44.192 [2024-04-26 15:03:26.699482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.699762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.699789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.700178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.700472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.700500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.700753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.701137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.701166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.701478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.701861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.701890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.702247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.702693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.702720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.703096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.703465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.703494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.703814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.704206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.704234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 
00:26:44.192 [2024-04-26 15:03:26.704605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.704860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.704888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.192 qpair failed and we were unable to recover it. 00:26:44.192 [2024-04-26 15:03:26.705268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.192 [2024-04-26 15:03:26.707541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.707613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.707982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.708365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.708394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.708637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.709005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.709034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.709438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.709801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.709828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.710209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.710562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.710589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.711019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.711401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.711429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 
00:26:44.193 [2024-04-26 15:03:26.711795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.712038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.712069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.712441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.712793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.712820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.713186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.713584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.713612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.714003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.714254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.714280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.714642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.714984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.715012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.715463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.715852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.715880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.716233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.716584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.716611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 
00:26:44.193 [2024-04-26 15:03:26.716970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.717337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.717364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.717672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.718018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.718051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.718355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.718722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.718749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.719104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.719405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.719434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.719886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.720148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.720178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.720510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.720878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.720913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.721307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.721676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.721703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 
00:26:44.193 [2024-04-26 15:03:26.722072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.722445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.722473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.722721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.723057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.723086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.723458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.723800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.723828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.724190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.724551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.724579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.724974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.725349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.725376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.725801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.726197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.726225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.726593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.726956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.726986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 
00:26:44.193 [2024-04-26 15:03:26.727363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.727776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.727803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.728123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.728447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.193 [2024-04-26 15:03:26.728480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.193 qpair failed and we were unable to recover it. 00:26:44.193 [2024-04-26 15:03:26.728880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.729252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.729280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.729550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.729931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.729959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.730306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.730691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.730718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.731090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.731363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.731389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.731725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.732181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.732209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 
00:26:44.194 [2024-04-26 15:03:26.732442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.732849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.732878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.733255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.733621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.733650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.734019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.734355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.734383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.734754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.734981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.735012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.735259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.735614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.735649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.736035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.737785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.737858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.738242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.738644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.738671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 
00:26:44.194 [2024-04-26 15:03:26.739040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.739412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.739439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.739793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.740162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.740193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.740576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.740922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.740956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.741333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.741673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.741700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.742127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.742487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.742515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.742896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.743266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.743294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.743663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.744037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.744066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 
00:26:44.194 [2024-04-26 15:03:26.745777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.747692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.747750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.748114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.748530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.748563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.748798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.749227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.749256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.749560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.749946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.749976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.750342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.750712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.750738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.194 qpair failed and we were unable to recover it. 00:26:44.194 [2024-04-26 15:03:26.751024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.194 [2024-04-26 15:03:26.751823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.751908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.752210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.752451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.752484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-04-26 15:03:26.752892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.753284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.753312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.753672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.754030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.754061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.754441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.754813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.754851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.755246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.755608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.755636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.756029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.756439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.756467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.756828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.757099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.757128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.757532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.757897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.757926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-04-26 15:03:26.758300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.758546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.758576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.758965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.759377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.759404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.759658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.760083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.760111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.760448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.760771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.760798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.761238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.761489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.761519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.761919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.762301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.762327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.762699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.763056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.763084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-04-26 15:03:26.763517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.763760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.763786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.764128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.764467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.764494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.764879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.765311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.765338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.765726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.765963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.765995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.766286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.766647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.766673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.767045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.767297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.767326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.767738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.768106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.768135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 
00:26:44.195 [2024-04-26 15:03:26.768511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.768816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.768853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.769313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.769648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.769675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.769931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.770338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.770365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.770776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.771028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.771056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.771427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.771791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.771818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.195 [2024-04-26 15:03:26.772213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.772568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.195 [2024-04-26 15:03:26.772595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.195 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.772866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.773242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.773271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 [2024-04-26 15:03:26.773604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.773978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.774007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.774384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.774759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.774785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.775176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.775535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.775562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.775937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.776310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.776338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.776699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.777081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.777110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.777482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.777723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.777750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.778103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.778474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.778500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 [2024-04-26 15:03:26.778862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.779239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.779266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.779601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.779966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.779995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.780253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.780619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.780646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.780994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.781324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.781353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1231046 Killed "${NVMF_APP[@]}" "$@" 00:26:44.196 [2024-04-26 15:03:26.781731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.782086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.782116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 15:03:26 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:26:44.196 [2024-04-26 15:03:26.782494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 15:03:26 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:44.196 [2024-04-26 15:03:26.782866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.782896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 15:03:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 15:03:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:44.196 [2024-04-26 15:03:26.783289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:26:44.196 [2024-04-26 15:03:26.783551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.783579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.783973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.784345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.784372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.784749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.785096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.785125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.785455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.785827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.785869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.786243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.786498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.786524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.786799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.787217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.787246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.787642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.787936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.787966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 
00:26:44.196 [2024-04-26 15:03:26.788304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.788736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.788763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.789101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.789480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.789507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.789896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.790244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.790272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 [2024-04-26 15:03:26.790574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 15:03:26 -- nvmf/common.sh@470 -- # nvmfpid=1232014 00:26:44.196 [2024-04-26 15:03:26.790981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 [2024-04-26 15:03:26.791011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.196 qpair failed and we were unable to recover it. 00:26:44.196 15:03:26 -- nvmf/common.sh@471 -- # waitforlisten 1232014 00:26:44.196 [2024-04-26 15:03:26.791289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 15:03:26 -- common/autotest_common.sh@817 -- # '[' -z 1232014 ']' 00:26:44.196 [2024-04-26 15:03:26.791562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.196 15:03:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:44.196 [2024-04-26 15:03:26.791598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 15:03:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.197 [2024-04-26 15:03:26.791874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 15:03:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:44.197 15:03:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.197 [2024-04-26 15:03:26.792255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:44.197 [2024-04-26 15:03:26.792284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 15:03:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:44.197 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:26:44.197 [2024-04-26 15:03:26.792676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.793126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.793155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.793535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.793907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.793937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.794336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.794706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.794735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.795229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.795573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.795602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.795989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.796360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.796388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.796642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.796930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.796960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 
00:26:44.197 [2024-04-26 15:03:26.797356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.797750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.797779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.798084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.798376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.798405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.798784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.799162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.799192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.799567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.799825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.799868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.800237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.800606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.800635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.801020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.801385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.801412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.801660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.802040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.802070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 
00:26:44.197 [2024-04-26 15:03:26.802440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.802670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.802700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.803090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.803463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.803491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.803879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.804282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.804308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.804559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.804920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.804949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.805399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.805813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.805850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.806107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.806391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.806419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.806757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.807124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.807154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 
00:26:44.197 [2024-04-26 15:03:26.807410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.807648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.807675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.807917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.808191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.808218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.808591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.808940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.808970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.809208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.809576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.809602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.809962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.810211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.810241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.197 [2024-04-26 15:03:26.810608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.810943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.197 [2024-04-26 15:03:26.810971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.197 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.811347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.811679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.811707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 
00:26:44.198 [2024-04-26 15:03:26.812214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.812458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.812485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.812831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.813273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.813300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.813693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.814031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.814059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.814288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.814677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.814705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.815129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.815369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.815395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.815670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.816031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.816064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.816224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.816570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.816597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 
00:26:44.198 [2024-04-26 15:03:26.816994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.817358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.817386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.817759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.817962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.817990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.818344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.818709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.818737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.819006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.819374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.819402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.819759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.820192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.820220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.820580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.820932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.820962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.821222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.821567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.821594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 
00:26:44.198 [2024-04-26 15:03:26.821984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.822369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.822397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.822792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.822963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.822990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.823422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.823670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.823700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.824080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.824482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.824508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.824799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.825202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.825231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.825558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.825913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.825943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.826196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.826490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.826519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 
00:26:44.198 [2024-04-26 15:03:26.826896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.827269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.827297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.827662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.828036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.828064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.828450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.828715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.828745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.829101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.829475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.829504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.829860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.830124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.830152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.198 [2024-04-26 15:03:26.830507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.830880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.198 [2024-04-26 15:03:26.830907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.198 qpair failed and we were unable to recover it. 00:26:44.199 [2024-04-26 15:03:26.831276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-04-26 15:03:26.831528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-04-26 15:03:26.831554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 
00:26:44.199 [2024-04-26 15:03:26.831952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-04-26 15:03:26.832278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-04-26 15:03:26.832307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-04-26 15:03:26.832558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-04-26 15:03:26.832914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-04-26 15:03:26.832944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.199 [2024-04-26 15:03:26.833322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-04-26 15:03:26.833644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.199 [2024-04-26 15:03:26.833673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.199 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.834066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.834315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.834342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.834607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.834930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.834958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.835331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.835691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.835719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.836174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.836443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.836470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-04-26 15:03:26.836859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.837314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.837342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.837693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.838046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.838075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.838464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.838727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.838754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.839132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.839497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.839525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.839880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.840243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.840270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.840539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.840924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.840966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.841373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.841626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.841652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-04-26 15:03:26.842030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.842430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.842458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.842860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.843103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.843130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.843554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.843934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.843963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.844403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.844641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.844672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-04-26 15:03:26.844915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-04-26 15:03:26.845201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.845228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.845473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.845830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.845874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.846238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.846601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.846628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-04-26 15:03:26.847053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.847439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.847466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.847811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.848180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.848210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.848469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.848484] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:44.510 [2024-04-26 15:03:26.848545] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.510 [2024-04-26 15:03:26.848757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.848785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.849214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.849586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.849613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.850027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.850426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.850454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.850874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.851266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.851294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-04-26 15:03:26.851676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.851958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.851988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.852359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.852737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.852765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.853225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.853444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.853474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.853861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.854243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.854270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.854650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.854901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.854945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.855363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.855651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.855679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.855859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.856243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.856271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-04-26 15:03:26.856423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.856698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.856726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.857124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.857506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.857533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.857910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.858186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.858214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.858498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.858878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.858907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.859290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.859643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.859670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.859923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.860406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.860434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.860809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.860975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.861003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-04-26 15:03:26.861352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.861733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.861767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.862128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.862367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.862395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.862736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.862960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.862989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.863394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.863649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.863680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.863954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.864368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.864397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-04-26 15:03:26.864771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-04-26 15:03:26.865142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.865172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.865574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.865920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.865948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-04-26 15:03:26.866395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.866609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.866635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.866925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.867279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.867307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.867698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.868053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.868081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.868517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.868894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.868922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.869218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.869576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.869602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.870058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.870437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.870464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.870861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.871158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.871184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-04-26 15:03:26.871582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.871945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.871975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.872407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.872760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.872787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.873171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.873466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.873493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.873890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.874365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.874392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.874797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.875127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.875156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.875607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.875962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.875991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.876394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.876757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.876783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-04-26 15:03:26.877032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.877438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.877465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.877811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.878028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.878057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.878353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.878584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.878610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.879004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.879384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.879411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.879772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.880149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.880178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.880446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.880830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.880871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.881291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.881675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.881702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-04-26 15:03:26.882040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.882416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.882444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.882617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.883030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.883058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.883434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.883687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.883714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.884088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.884501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.884529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.884892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.885234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.885261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.511 [2024-04-26 15:03:26.885578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.885939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.885969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-04-26 15:03:26.886244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.886519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-04-26 15:03:26.886546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-04-26 15:03:26.886863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.887289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.887316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.887555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.887805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.887833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.888107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.888514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.888542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.888919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.889202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.889230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.889623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.889997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.890026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.890410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.890688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.890716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.891167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.891515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.891543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-04-26 15:03:26.891931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.892314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.892341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.892700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.893091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.893120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.893485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.893866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.893894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.894314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.894673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.894699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.894977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.895326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.895353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.895688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.895960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.895989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.896367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.896741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.896767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-04-26 15:03:26.897139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.897476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.897503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.897811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.898089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.898119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.898544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.898908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.898939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.899346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.899772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.899801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.900184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.900584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.900612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.900987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.901368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.901397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.901766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.902131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.902160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-04-26 15:03:26.902539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.902938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.902967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.903360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.903717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.903744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.904188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.904559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.904587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.905017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.905454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.905481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.905865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.906276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.906305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.906674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.906932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.906960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-04-26 15:03:26.907345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.907666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.907693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-04-26 15:03:26.908079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.908322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-04-26 15:03:26.908351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.908724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.908982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.909013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.909397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.909653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.909679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.910088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.910460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.910488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.910836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.911199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.911227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.911603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.912013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.912041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.912421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.912754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.912781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-04-26 15:03:26.913126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.913486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.913513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.913826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.914248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.914275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.914536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.914873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.914902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.915059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.915438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.915466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.915829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.916229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.916257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.916617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.916986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.917017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.917359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.917728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.917756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-04-26 15:03:26.918086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.918447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.918476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.918857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.919131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.919159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.919565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.919993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.920022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.920416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.920782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.920810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.921193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.921401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.921433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.921874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.922239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.922265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.922625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.922975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.923004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-04-26 15:03:26.923253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.923580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.923608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.923877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.924260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.924289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.924543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.924781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.924807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.925243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.925612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.925639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-04-26 15:03:26.926005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-04-26 15:03:26.926372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.926400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.926769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.927131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.927161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.927545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.927892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.927921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-04-26 15:03:26.928382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.928751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.928779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.929155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.929513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.929540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.929788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.930160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.930189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.930487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.930734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.930761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.931121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.931497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.931525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.931909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.932132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.932162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.932530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.932902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.932931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-04-26 15:03:26.933222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.933622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.933650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.934007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.934429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.934457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.934879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.935206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.935234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.935662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.935979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.936006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.936365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.936698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.936727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.937111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.937482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.937509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.937931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.938314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.938343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-04-26 15:03:26.938718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.939083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.939111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.939484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.939853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.939882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.940287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.940623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.940650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.941034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.941449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.941480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.941778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.942134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.942163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.942574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.942924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.942953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.943309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.943526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.943553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-04-26 15:03:26.943950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.944228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.944255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.944614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.944613] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.514 [2024-04-26 15:03:26.944972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.945000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.945368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.945743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.945771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.946135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.946499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.946527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.946913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.947302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.947329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.947693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.948029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.948059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.948453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.948813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.948849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-04-26 15:03:26.949222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.949557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.949584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-04-26 15:03:26.949866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-04-26 15:03:26.950277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.950304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.950662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.951039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.951069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.951481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.951855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.951884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.952279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.952621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.952649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.952910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.953274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.953301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.953694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.954037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.954066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-04-26 15:03:26.954326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.954687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.954715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.955076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.955443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.955470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.955858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.956230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.956258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.956634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.956986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.957014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.957360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.957786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.957814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.958220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.958592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.958620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.958891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.959263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.959290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-04-26 15:03:26.959728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.960023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.960050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.960417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.960769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.960795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.961234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.961599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.961625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.962030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.962431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.962459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.962702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.962998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.963027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.963405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.963768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.963796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.964191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.964626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.964653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-04-26 15:03:26.965047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.965319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.965348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.965725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.966035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.966063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.966447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.966795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.966822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.967036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.967440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.967467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.967861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.968247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.968274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.968638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.968977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.969005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.969391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.969646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.969675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-04-26 15:03:26.970033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.970383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.970410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.970803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.971200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.971228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-04-26 15:03:26.971598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.971931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-04-26 15:03:26.971960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.972334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.972693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.972720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.972973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.973257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.973284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.973662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.974032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.974061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.974334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.974715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.974741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-04-26 15:03:26.975107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.975477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.975505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.975924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.976185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.976215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.976613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.976968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.976996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.977366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.977702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.977730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.978110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.978342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.978368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.978748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.978995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.979022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.979427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.979775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.979802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-04-26 15:03:26.980237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.980477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.980504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.980723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.981072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.981101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.981468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.981832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.981870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.982219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.982510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.982537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.982832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.983197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.983226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.983451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.983814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.983850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.984198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.984555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.984582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-04-26 15:03:26.984831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.985225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.985252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.985613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.985760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.985790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.986161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.986394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.986420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.986759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.987098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.987128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.987508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.987758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.987786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.988155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.988312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.988344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.988585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.988954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.988982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-04-26 15:03:26.989358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.989607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.989633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.990008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.990378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.990404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.990784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.991178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.991206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-04-26 15:03:26.991455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-04-26 15:03:26.991716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.991743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.992123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.992474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.992502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.992726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.993059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.993087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.993460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.993857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.993894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-04-26 15:03:26.994260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.994638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.994664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.994926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.995270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.995297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.995684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.996036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.996064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.996322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.996665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.996693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.997062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.997433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.997462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.997729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.997988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.998017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.998402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.998787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.998815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-04-26 15:03:26.999189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.999552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:26.999579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:26.999939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.000327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.000356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.000642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.000898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.000932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.001337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.001645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.001673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.002015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.002374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.002401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.002659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.003014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.003041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.003395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.003647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.003676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-04-26 15:03:27.004126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.004499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.004525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.004801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.005199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.005226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.005593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.005973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.006002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.006363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.006787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.006814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.007049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.007378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.007405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.007764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.008110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.008146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.008584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.008936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.008965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-04-26 15:03:27.009220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.009595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.009624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.009890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.010271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.010300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.010694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.011123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.011151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.011394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.011796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.011823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.012157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.012531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.012559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.012919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.013282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.013308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 00:26:44.517 [2024-04-26 15:03:27.013685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.014074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.517 [2024-04-26 15:03:27.014103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.517 qpair failed and we were unable to recover it. 
00:26:44.517 [2024-04-26 15:03:27.014536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.014923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.014951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.015324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.015696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.015728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.016098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.016461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.016487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.016876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.017282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.017310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.017698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.018122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.018150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.018531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.018906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.018934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.019309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.019735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.019761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-04-26 15:03:27.020115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.020472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.020500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.020886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.021237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.021263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.021507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.021743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.021769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.022086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.022487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.022515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.022895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.023297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.023325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.023764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.024135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.024163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.024405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.024627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.024657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-04-26 15:03:27.024919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.025296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.025323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.025695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.026061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.026089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.026505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.026869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.026898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.027288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.027640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.027666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.028017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.028395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.028423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.028790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.028987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.029018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.029411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.029775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.029802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 
00:26:44.518 [2024-04-26 15:03:27.030214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.030442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.030471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.030867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.031210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.031237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.031602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.031971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.032000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.032383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.032760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.032787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.518 [2024-04-26 15:03:27.033137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.033508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.518 [2024-04-26 15:03:27.033536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.518 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.033790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.034169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.034199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.034592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.034930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.034959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-04-26 15:03:27.035309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.035661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.035689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.036037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.036281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.036311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.036689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.037026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.037056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.037444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.037890] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.519 [2024-04-26 15:03:27.037920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.037943] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.519 [2024-04-26 15:03:27.037954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 [2024-04-26 15:03:27.037954] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.519 [2024-04-26 15:03:27.037961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.519 [2024-04-26 15:03:27.037971] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.038212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.038214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:44.519 [2024-04-26 15:03:27.038309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:44.519 [2024-04-26 15:03:27.038473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:44.519 [2024-04-26 15:03:27.038575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.038602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 [2024-04-26 15:03:27.038473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-04-26 15:03:27.038894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.039280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.039308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.039704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.040088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.040118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.040476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.040821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.040862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.041279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.041657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.041686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.041963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.042235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.042261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.042622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.042982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.043010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.043259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.043670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.043704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-04-26 15:03:27.044086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.044442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.044470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.044760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.045033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.045062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.045297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.045535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.045566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.045969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.046348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.046377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.046650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.047028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.047057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.047389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.047714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.047740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.048146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.048511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.048538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-04-26 15:03:27.048778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.049115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.049144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.049571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.049829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.049865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.050255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.050596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.050629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.050987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.051369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.051397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.051849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.052218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.052245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.052513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.052659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.052686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.053141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.053382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.053411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-04-26 15:03:27.053658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.054034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.054062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-04-26 15:03:27.054422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.054665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-04-26 15:03:27.054695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.055002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.055354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.055382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.055653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.056036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.056065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.056293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.056660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.056687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.057087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.057487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.057520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.057776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.058195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.058223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-04-26 15:03:27.058592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.058812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.058856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.059222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.059458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.059484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.059746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.060002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.060031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.060266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.060495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.060525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.060920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.061293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.061320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.061560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.061933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.061962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.062355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.062592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.062618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-04-26 15:03:27.062989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.063232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.063259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.063482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.063728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.063762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.064161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.064529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.064557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.064955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.065339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.065366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.065724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.065979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.066008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.066387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.066731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.066758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.067114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.067447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.067476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-04-26 15:03:27.067770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.068012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.068042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.068295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.068687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.068714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.069079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.069433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.069460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.069858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.070221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.070248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.070468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.070730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.070757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.071124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.071475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.071504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.071724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.071989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.072020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-04-26 15:03:27.072134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.072415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.072442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.072661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.072901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.072929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.073196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.073560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.073589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.073982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.074318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.074346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-04-26 15:03:27.074690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-04-26 15:03:27.075050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.075078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.075467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.075861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.075890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.076298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.076578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.076604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-04-26 15:03:27.076824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.077087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.077115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.077479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.077848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.077877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.078242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.078511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.078538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.078902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.079242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.079269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.079639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.079854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.079884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.080243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.080602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.080630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.081001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.081364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.081390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-04-26 15:03:27.081750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.082113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.082140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.082368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.082753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.082780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.083063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.083439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.083466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.083913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.084242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.084275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.084632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.084879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.084906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.085049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.085275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.085302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.085658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.085870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.085898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-04-26 15:03:27.086284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.086638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.086664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.087052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.087393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.087419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.087802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.088219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.088248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.088656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.089037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.089067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.089336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.089721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.089748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.090002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.090217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.090244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.090617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.090978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.091005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-04-26 15:03:27.091407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.091657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.091684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.092062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.092434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.092461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.092714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.092933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.092962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.093198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.093572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.093599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.093941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.094290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.094317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.094631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.095005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.095034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-04-26 15:03:27.095434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.095818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.095855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-04-26 15:03:27.096113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.096459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-04-26 15:03:27.096488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.096870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.097256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.097292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.097679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.097933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.097962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.098113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.098488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.098517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.098747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.099076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.099105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.099369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.099714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.099742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.100099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.100306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.100333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 
00:26:44.522 [2024-04-26 15:03:27.100443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.100708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.100736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.101113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.101471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.101498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.101880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.102265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.102293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.102645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.103008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.103037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.103439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.103810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.103858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.104240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.104480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.104506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.104881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.105218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.105247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 
00:26:44.522 [2024-04-26 15:03:27.105624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.106003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.106040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.106296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.106651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.106677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.107038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.107421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.107448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.107924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.108294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.108321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.108725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.108950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.108978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.109274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.109634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.109660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.109900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.110340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.110367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 
00:26:44.522 [2024-04-26 15:03:27.110747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.110970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.110998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.111435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.111799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.111826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-04-26 15:03:27.112092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.112441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-04-26 15:03:27.112469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.112830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.113229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.113255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.113516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.113769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.113798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.114199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.114568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.114595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.114833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.115205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.115231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 
00:26:44.523 [2024-04-26 15:03:27.115446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.115538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.115563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.115853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.116216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.116243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.116686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.116908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.116935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.117332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.117684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.117710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.118090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.118460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.118486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.118857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.119223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.119250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.119628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.119881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.119909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 
00:26:44.523 [2024-04-26 15:03:27.120339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.120705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.120732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.121047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.121383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.121410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.121775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.122156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.122184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.122558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.122929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.122957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.123316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.123691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.123718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.124169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.124544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.124571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.124954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.125328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.125355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 
00:26:44.523 [2024-04-26 15:03:27.125734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.126116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.126144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.126488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.126823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.126860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.127220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.127574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.127601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.127976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.128198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.128223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.128633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.128858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.128886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.129270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.129617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.129643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.130099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.130332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.130360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 
00:26:44.523 [2024-04-26 15:03:27.130713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.131099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.131128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.131527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.131771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.131799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.132026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.132174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.132204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.132311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.132541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.132568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.132945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.133320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.133347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.133708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.134061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.134090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 00:26:44.523 [2024-04-26 15:03:27.134469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.134819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.523 [2024-04-26 15:03:27.134856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.523 qpair failed and we were unable to recover it. 
00:26:44.523 [2024-04-26 15:03:27.135277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-04-26 15:03:27.135538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.523 [2024-04-26 15:03:27.135564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420
00:26:44.523 qpair failed and we were unable to recover it.
[... the same error group repeats for every reconnect attempt from 2024-04-26 15:03:27.135831 through 15:03:27.241268 (elapsed 00:26:44.523 to 00:26:44.798): posix.c:1037:posix_sock_create reports connect() failed, errno = 111 (ECONNREFUSED), nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:44.798 [2024-04-26 15:03:27.241241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.798 [2024-04-26 15:03:27.241268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420
00:26:44.798 qpair failed and we were unable to recover it.
00:26:44.798 [2024-04-26 15:03:27.241656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.241868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.241897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-04-26 15:03:27.242316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.242684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.242711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-04-26 15:03:27.242972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.243348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.243374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-04-26 15:03:27.243630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.243979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.244008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-04-26 15:03:27.244374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.244745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.244772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-04-26 15:03:27.244994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.245364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.245391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-04-26 15:03:27.245776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.246131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.246160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 
00:26:44.798 [2024-04-26 15:03:27.246534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.246887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.246916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-04-26 15:03:27.247279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.247494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.247520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-04-26 15:03:27.247729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.248098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.248127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.798 qpair failed and we were unable to recover it. 00:26:44.798 [2024-04-26 15:03:27.248489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.248701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.798 [2024-04-26 15:03:27.248729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.249085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.249337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.249364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.249756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.250130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.250159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.250526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.250894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.250923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 
00:26:44.799 [2024-04-26 15:03:27.251310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.251713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.251740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.252143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.252513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.252540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.252913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.253270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.253299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.253666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.254037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.254066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.254445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.254657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.254686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.254937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.255173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.255200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.255465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.255849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.255878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 
00:26:44.799 [2024-04-26 15:03:27.256255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.256502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.256529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.256905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.257250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.257277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.257689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.258035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.258072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.258414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.258644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.258677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.258920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.259321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.259349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.259574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.259805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.259833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.260105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.260477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.260504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 
00:26:44.799 [2024-04-26 15:03:27.260883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.261277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.261304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.261645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.262017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.262047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.262292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.262642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.262669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.263053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.263423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.263451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.263810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.264055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.264084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.799 qpair failed and we were unable to recover it. 00:26:44.799 [2024-04-26 15:03:27.264446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.264806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.799 [2024-04-26 15:03:27.264832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.265214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.265585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.265619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 
00:26:44.800 [2024-04-26 15:03:27.265866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.266308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.266335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.266700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.266937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.266965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.267379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.267731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.267758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.268138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.268512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.268539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.268947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.269302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.269329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.269786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.269895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.269921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.270172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.270507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.270535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 
00:26:44.800 [2024-04-26 15:03:27.270972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.271416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.271444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.271719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.271962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.271990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.272380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.272592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.272624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.272985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.273354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.273380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.273610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.273869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.273898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.274283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.274656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.274684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.275073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.275372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.275399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 
00:26:44.800 [2024-04-26 15:03:27.275693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.276071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.276100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.276296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.276644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.276671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.276912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.277367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.277393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.277683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.278028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.278056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.278454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.278671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.278697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.279078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.279325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.279362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.279740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.280000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.280027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 
00:26:44.800 [2024-04-26 15:03:27.280253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.280649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.280676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.281054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.281414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.281441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.281655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.282037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.282065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.282342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.282677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.282703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.283088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.283470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.283497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.800 [2024-04-26 15:03:27.283890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.284220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.800 [2024-04-26 15:03:27.284247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.800 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.284704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.285087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.285115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 
00:26:44.801 [2024-04-26 15:03:27.285356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.285781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.285808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.286075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.286452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.286479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.286870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.287253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.287280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.287656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.288010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.288037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.288438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.288817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.288854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.289208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.289556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.289583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.289930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.290331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.290359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 
00:26:44.801 [2024-04-26 15:03:27.290602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.290893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.290922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.291167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.291583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.291610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.291994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.292334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.292362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.292792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.293088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.293117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.293356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.293706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.293734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.293985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.294311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.294340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.294743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.295203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.295232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 
00:26:44.801 [2024-04-26 15:03:27.295606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.295866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.295894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.296270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.296590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.296617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.296856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.297284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.297311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.297577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.297834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.297873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.298260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.298484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.298510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.298672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.298875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.298904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.299177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.299545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.299571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 
00:26:44.801 [2024-04-26 15:03:27.299956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.300296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.300322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.300662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.301018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.301046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.301428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.301771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.301798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.302204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.302562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.302589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.302974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.303355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.303383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.801 qpair failed and we were unable to recover it. 00:26:44.801 [2024-04-26 15:03:27.303633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.303862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.801 [2024-04-26 15:03:27.303893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.304172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.304532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.304560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-04-26 15:03:27.304795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.305163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.305191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.305454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.305829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.305869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.306107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.306458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.306485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.306885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.307272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.307298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.307697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.307934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.307961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.308315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.308659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.308688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.309076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.309425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.309452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-04-26 15:03:27.309846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.310184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.310211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.310568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.310933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.310962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.311350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.311740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.311767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.312025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.312397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.312423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.312794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.313163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.313190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.313548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.313790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.313819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.314065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.314443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.314469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-04-26 15:03:27.314849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.315083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.315109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.315471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.315827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.315866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.316301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.316517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.316543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.316937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.317284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.317312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.317425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.317655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.317683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.318036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.318284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.318310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.318684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.318977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.319007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 
00:26:44.802 [2024-04-26 15:03:27.319354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.319637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.319663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.320016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.320394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.320422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.320818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.321066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.321094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.321541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.321910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.321939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.322211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.322584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.322610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.323017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.323240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.802 [2024-04-26 15:03:27.323266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.802 qpair failed and we were unable to recover it. 00:26:44.802 [2024-04-26 15:03:27.323657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.324037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.324065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-04-26 15:03:27.324463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.324712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.324738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.325108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.325457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.325484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.325828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.326120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.326147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.326532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.326898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.326928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.327286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.327510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.327540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.327789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.328156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.328184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.328591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.328930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.328958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-04-26 15:03:27.329348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.329712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.329738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.329965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.330135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.330164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.330394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.330616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.330642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.331068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.331437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.331464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.331827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.331942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.331970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.332339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.332593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.332623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.332988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.333366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.333393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-04-26 15:03:27.333717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.334087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.334115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.334501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.334863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.334893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.335196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.335551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.335579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.335945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.336159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.336186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.336487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.336725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.336752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.337146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.337508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.337535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.337773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.338050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.338080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 
00:26:44.803 [2024-04-26 15:03:27.338444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.338751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.338778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.339018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.339387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.339414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.339792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.340169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.803 [2024-04-26 15:03:27.340197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.803 qpair failed and we were unable to recover it. 00:26:44.803 [2024-04-26 15:03:27.340547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.340663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.340694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.341086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.341346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.341373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.341775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.342203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.342231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.342599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.342925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.342953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 
00:26:44.804 [2024-04-26 15:03:27.343355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.343706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.343732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.344118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.344408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.344435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.344655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.345014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.345043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.345267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.345691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.345719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.346036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.346383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.346411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.346653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.347013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.347043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.347263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.347475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.347502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 
00:26:44.804 [2024-04-26 15:03:27.347894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.348137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.348164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.348582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.348831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.348870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.349135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.349579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.349606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.350026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.350373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.350401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.350788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.351044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.351072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.351491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.351731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.351757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.352014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.352229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.352255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 
00:26:44.804 [2024-04-26 15:03:27.352506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.352741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.352768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.353148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.353496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.353523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.353925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.354291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.354319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.354430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.354755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.354782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.355172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.355525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.355553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.356005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.356433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.356459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.356877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.357244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.357272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 
00:26:44.804 [2024-04-26 15:03:27.357666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.358116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.358144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.358511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.358862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.358892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.359276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.359490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.359517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.804 qpair failed and we were unable to recover it. 00:26:44.804 [2024-04-26 15:03:27.359657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.804 [2024-04-26 15:03:27.360056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.360084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.360470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.360852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.360880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.361127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.361381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.361411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.361669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.362048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.362076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 
00:26:44.805 [2024-04-26 15:03:27.362462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.362692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.362723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.362898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.363241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.363267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.363671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.364026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.364054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.364289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.364675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.364702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.365039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.365253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.365280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.365529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.365936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.365965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.366332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.366580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.366606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 
00:26:44.805 [2024-04-26 15:03:27.367022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.367389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.367415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.367799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.368143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.368171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.368420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.368656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.368682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.369035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.369379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.369411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.369832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.370038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.370065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.370372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.370750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.370779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.371170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.371387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.371414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 
00:26:44.805 [2024-04-26 15:03:27.371686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.371932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.371963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.372338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.372692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.372719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.373165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.373515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.373542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.373924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.374309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.374336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.374561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.374952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.374982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.375386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.375763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.375789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.376147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.376515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.376547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 
00:26:44.805 [2024-04-26 15:03:27.376811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.376923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.376952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.377342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.377682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.377709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.378135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.378373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.378399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.805 qpair failed and we were unable to recover it. 00:26:44.805 [2024-04-26 15:03:27.378785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.805 [2024-04-26 15:03:27.379164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.379193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.379426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.379698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.379726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.379961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.380180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.380206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.380601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.380970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.380998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 
00:26:44.806 [2024-04-26 15:03:27.381396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.381694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.381720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.381937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.382348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.382374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.382823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.383064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.383105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.383359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.383728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.383755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.384004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.384400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.384427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.384695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.385083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.385111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.385539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.385817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.385854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 
00:26:44.806 [2024-04-26 15:03:27.386226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.386571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.386598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.386826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.387114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.387144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.387241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.387643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.387670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.388038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.388392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.388427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.388790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.389002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.389031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.389142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.389483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.389515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.389880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.390150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.390177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 
00:26:44.806 [2024-04-26 15:03:27.390413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.390759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.390785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.391171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.391570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.391597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.392001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.392324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.392351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.392720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.393088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.393116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.393496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.393870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.393899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.394129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.394490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.394517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.394802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.395088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.395116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 
00:26:44.806 [2024-04-26 15:03:27.395378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.395745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.395772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.396148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.396542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.396568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.396947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.397316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.397342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.806 qpair failed and we were unable to recover it. 00:26:44.806 [2024-04-26 15:03:27.397662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.806 [2024-04-26 15:03:27.398032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.398060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.398438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.398821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.398872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.399172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.399524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.399550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.399931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.400294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.400322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-04-26 15:03:27.400546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.400923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.400952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.401328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.401576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.401604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.401993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.402202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.402228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.402682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.402924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.402952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.403351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.403720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.403746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.403972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.404279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.404306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.404523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.404901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.404929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-04-26 15:03:27.405312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.405553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.405579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.405931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.406299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.406326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.406699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.407078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.407107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.407372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.407758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.407785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.408219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.408436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.408462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.408779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.409142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.409172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.409530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.409893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.409921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-04-26 15:03:27.410265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.410636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.410663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.411038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.411279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.411306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.411550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.411941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.411969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.412393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.412726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.412753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.412982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.413202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.413229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.413519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.413883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.413911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.414281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.414651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.414678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 
00:26:44.807 [2024-04-26 15:03:27.414897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.415135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.415162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.415527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.415740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.415765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.416196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.416556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.416583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.807 [2024-04-26 15:03:27.416962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.417380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.807 [2024-04-26 15:03:27.417407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.807 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.417783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.417992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.418020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.418183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.418587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.418614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.418850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.419129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.419155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-04-26 15:03:27.419531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.419754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.419779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.420164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.420607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.420634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.420904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.421282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.421309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.421574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.421952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.421981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.422375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.422797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.422824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.423260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.423472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.423499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.423860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.424125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.424155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-04-26 15:03:27.424529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.424910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.424939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.425345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.425727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.425754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.426128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.426375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.426401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.426782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.426991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.427019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.427407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.427502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.427529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.427826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.428194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.428222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.428581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.428832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.428884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-04-26 15:03:27.429287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.429665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.429692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.429910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.430326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.430352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.430722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.430977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.431005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.431400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.431755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.431782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.432031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.432424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.432450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.432831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.433284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.433311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 00:26:44.808 [2024-04-26 15:03:27.433681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.434036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.434066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.808 qpair failed and we were unable to recover it. 
00:26:44.808 [2024-04-26 15:03:27.434293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.808 [2024-04-26 15:03:27.434545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.434572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.434751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.434951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.434978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.435398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.435610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.435636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.435995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.436360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.436386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.436754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.437157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.437184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.437449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.437823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.437862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.438272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.438652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.438680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 
00:26:44.809 [2024-04-26 15:03:27.439072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.439513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.439539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.439807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.440188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.440215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.440592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.440708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.440737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.441061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.441400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.441428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.441851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.442118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.442144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.442375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.442738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.442764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.443035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.443406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.443433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 
00:26:44.809 [2024-04-26 15:03:27.443833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.444110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.444140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.444533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.444880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.444909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.445355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.445723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.445750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.446126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.446338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.446364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.446724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.447058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.447086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.447478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.447863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.447891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.448269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.448641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.448668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 
00:26:44.809 [2024-04-26 15:03:27.449062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.449414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.449441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.449667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.450017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.450045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.450245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.450450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.809 [2024-04-26 15:03:27.450477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:44.809 qpair failed and we were unable to recover it. 00:26:44.809 [2024-04-26 15:03:27.450864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.451245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.451274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-04-26 15:03:27.451490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.451750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.451779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-04-26 15:03:27.452153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.452522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.452549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-04-26 15:03:27.452919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.453302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.453330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 
00:26:45.077 [2024-04-26 15:03:27.453585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.453963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.453991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-04-26 15:03:27.454394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.454607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.454633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-04-26 15:03:27.454988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.455362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.455389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-04-26 15:03:27.455845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-04-26 15:03:27.456070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.456097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.456335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.456716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.456743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.457150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.457391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.457417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.457810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.458164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.458192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 
00:26:45.078 [2024-04-26 15:03:27.458599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.458829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.458868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.459294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.459513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.459539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.459962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.460204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.460230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.460471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.460866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.460894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.461314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.461687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.461713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.462082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.462452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.462482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.462876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.463120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.463150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 
00:26:45.078 [2024-04-26 15:03:27.463247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.463575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.463603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.463824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.464194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.464221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.464487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.464877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.464908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.465275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.465646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.465673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.465898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.466294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.466322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.466713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.466964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.466992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.467449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.467693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.467719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 
00:26:45.078 [2024-04-26 15:03:27.468099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.468450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.468476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.468888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.469285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.469312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.469699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.469915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.469942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.470295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.470646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.470674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.471087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.471455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.471482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.471700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.471953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.471982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.472374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.472749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.472777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 
00:26:45.078 [2024-04-26 15:03:27.473146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.473509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.473541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.473974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.474230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.474259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.474506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.474757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-04-26 15:03:27.474784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-04-26 15:03:27.475189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.475539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.475566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.475769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.475942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.475970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.476256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.476491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.476518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.476906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.477270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.477300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 
00:26:45.079 [2024-04-26 15:03:27.477708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.478024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.478052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.478433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.478712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.478738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.479114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.479482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.479509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.479772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.480118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.480153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.480540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.480913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.480941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.481321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.481540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.481567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.481958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.482334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.482362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 
00:26:45.079 [2024-04-26 15:03:27.482462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.482826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.482864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.483269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.483520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.483547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.483924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.484342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.484369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.484599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.484975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.485004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.485266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.485635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.485662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.486142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.486555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.486582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.486957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.487328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.487368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 
00:26:45.079 [2024-04-26 15:03:27.487763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.488065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.488093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.488474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.488869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.488899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.489136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.489385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.489413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.489861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.490076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.490102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.490425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.490797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.490823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.491224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.491582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.491611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.491977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.492347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.492374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 
00:26:45.079 [2024-04-26 15:03:27.492753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.493082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.493112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.493501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.493711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.493737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-04-26 15:03:27.494124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.494341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-04-26 15:03:27.494373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.494603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.494863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.494892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.495157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.495539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.495565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.495935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.496177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.496205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.496571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.496938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.496965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 
00:26:45.080 [2024-04-26 15:03:27.497197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.497600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.497626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.498020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.498370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.498398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.498671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.498876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.498903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.499308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.499682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.499709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.500079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.500313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.500339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.500700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.501091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.501119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.501348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.501586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.501615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 
00:26:45.080 [2024-04-26 15:03:27.501865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.502260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.502287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.502646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.503110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.503137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.503460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.503677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.503703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.504138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.504518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.504546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.504931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.505292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.505320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.505690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.506032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.506061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.506440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.506857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.506885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 
00:26:45.080 [2024-04-26 15:03:27.507239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.507614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.507642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.508019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.508380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.508406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.508814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.509211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.509239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.509605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.509808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.509834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.510078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.510485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.510512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.510890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.511129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.511156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.511530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.511764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.511794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 
00:26:45.080 [2024-04-26 15:03:27.512178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.512536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.512564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.512967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.513350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.513377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.513738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.514079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.514107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-04-26 15:03:27.514340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-04-26 15:03:27.514625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.514651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.515038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.515399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.515426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.515796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.516031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.516058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.516428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.516779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.516805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 
00:26:45.081 [2024-04-26 15:03:27.517228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.517621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.517648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.518107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.518329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.518355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.518595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.518894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.518922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.519313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.519668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.519695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.520089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.520460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.520488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.520864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.521227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.521254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.521605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.521980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.522008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 
00:26:45.081 [2024-04-26 15:03:27.522382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.522818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.522856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.523117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.523502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.523528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.523909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.524287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.524314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.524546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.524753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.524779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.525059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.525391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.525418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.525806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.526156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.526186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.526552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.526931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.526959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 
00:26:45.081 [2024-04-26 15:03:27.527327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.527536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.527562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.527939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.528192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.528219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.528600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.528818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.528853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.529127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.529493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.529522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.529907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.530153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.530180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.530568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.530915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.530942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.531348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.531734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.531760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 
00:26:45.081 [2024-04-26 15:03:27.531993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.532211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.532240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.532473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.532654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.532681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.533056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.533285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.533311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-04-26 15:03:27.533714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-04-26 15:03:27.534087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.534116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.534496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.534867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.534895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.535285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.535666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.535693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.536069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.536331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.536357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 
00:26:45.082 [2024-04-26 15:03:27.536779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.537129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.537157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.537390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.537725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.537752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.537979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.538226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.538255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.538684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.539034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.539063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.539297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.539708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.539735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.539961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.540251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.540278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.540571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.540961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.540990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 
00:26:45.082 [2024-04-26 15:03:27.541375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.541727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.541761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.542183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.542425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.542451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.542858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.543198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.543224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.543623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.543974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.544003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.544407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.544756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.544783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.545186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.545531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.545558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.545828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.546198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.546226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 
00:26:45.082 [2024-04-26 15:03:27.546325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.546411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.546436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.546818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.547165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.547194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.547546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.547767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.547797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.548031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.548385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.548413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.548767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.549004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.549033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-04-26 15:03:27.549427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.549780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-04-26 15:03:27.549806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.550076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.550474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.550502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 
00:26:45.083 [2024-04-26 15:03:27.550878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.551289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.551316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.551663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.552012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.552040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.552303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.552637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.552664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.553036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.553428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.553456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.553689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.553974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.554002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.554278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.554459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.554486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.554852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.555145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.555171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 
00:26:45.083 [2024-04-26 15:03:27.555391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.555767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.555794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.556112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.556434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.556461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.556877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.557102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.557129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.557374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.557664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.557691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.558040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.558405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.558431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.558779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.559170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.559198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.559561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.559939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.559966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 
00:26:45.083 [2024-04-26 15:03:27.560208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.560553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.560581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.560973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.561304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.561331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.561714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.562058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.562086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.562328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.562663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.562691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.563035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.563284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.563310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.563659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.564031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.564060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.564373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.564584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.564610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 
00:26:45.083 [2024-04-26 15:03:27.564977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.565308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.565336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.565724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.566050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.566080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.566308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.566687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.566716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.567112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.567443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.567470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.567800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.568177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.568205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.568584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.568967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.568996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.083 qpair failed and we were unable to recover it. 00:26:45.083 [2024-04-26 15:03:27.569391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.083 [2024-04-26 15:03:27.569771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.569799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 
00:26:45.084 [2024-04-26 15:03:27.570142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.570488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.570515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.570706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.571045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.571075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.571446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.571806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.571833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.572213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.572551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.572578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.572976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.573087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.573116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.573462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.573775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.573801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.573991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.574399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.574426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 
00:26:45.084 [2024-04-26 15:03:27.574687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.575077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.575105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.575419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.575768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.575795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.576222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.576463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.576490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.576594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.576875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.576905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.577252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.577538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.577566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.577785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.578163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.578192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.578443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.578824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.578863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 
00:26:45.084 [2024-04-26 15:03:27.579254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.579610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.579637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.580006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.580115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.580145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.580532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.580867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.580896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.581241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.581614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.581641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.582083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.582487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.582515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.582856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.582947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.582975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.583356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.583638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.583665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 
00:26:45.084 [2024-04-26 15:03:27.584043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.584409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.584441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.584805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.585196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.585225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.585468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.585726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.585756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.585986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.586194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.586221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.586632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.586855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.586883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.587263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.587615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.587645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 00:26:45.084 [2024-04-26 15:03:27.588018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.588277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.084 [2024-04-26 15:03:27.588304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.084 qpair failed and we were unable to recover it. 
00:26:45.085 [2024-04-26 15:03:27.588678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.589039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.589069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.589453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.589804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.589831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.590219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.590647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.590673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.591056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.591245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.591283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.591603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.591958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.591986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.592347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.592697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.592725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.593102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.593492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.593520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 
00:26:45.085 [2024-04-26 15:03:27.593798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.594177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.594205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.594429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.594667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.594695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.595076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.595295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.595326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.595687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.596038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.596067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.596310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.596573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.596603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.596877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.597091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.597119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.597365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.597759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.597792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 
00:26:45.085 [2024-04-26 15:03:27.598202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.598546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.598575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.598960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.599307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.599335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.599697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.600051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.600079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.600447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.600799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.600827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.601208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.601559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.601586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.601965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.602195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.602222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.602590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.602835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.602874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 
00:26:45.085 [2024-04-26 15:03:27.603127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.603340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.603367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.603782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.604008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.604038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.604282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.604639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.604673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.605060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.605152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.605178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.605561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.605766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.605794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.606082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.606461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.606490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 00:26:45.085 [2024-04-26 15:03:27.606879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.607268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.085 [2024-04-26 15:03:27.607294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.085 qpair failed and we were unable to recover it. 
00:26:45.085 [2024-04-26 15:03:27.607665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.608006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.608035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.608422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.608785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.608812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.609191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.609555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.609582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.609961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.610317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.610345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.610537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.610907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.610935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.611324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.611708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.611735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.612113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.612466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.612494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 
00:26:45.086 [2024-04-26 15:03:27.612714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.613061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.613089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.613476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.613860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.613890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.614263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.614480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.614506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.614874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.615276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.615303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.615526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.615814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.615852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.616273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.616516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.616546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.616829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.617101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.617128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 
00:26:45.086 [2024-04-26 15:03:27.617508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.617913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.617943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.618173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.618475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.618502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.618868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.619243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.619271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.619503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.619885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.619912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.620288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.620495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.620521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.620925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.621336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.621364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.621582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.621804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.621831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 
00:26:45.086 [2024-04-26 15:03:27.622247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.622617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.622643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.623037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.623430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.623457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.623850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.624119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.624149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.624377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.624767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.624793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.625191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.625534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.625561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.086 qpair failed and we were unable to recover it. 00:26:45.086 [2024-04-26 15:03:27.625961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.626379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.086 [2024-04-26 15:03:27.626406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.626578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.626834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.626872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 
00:26:45.087 [2024-04-26 15:03:27.627229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.627484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.627510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.627754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.628055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.628084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.628332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.628728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.628755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.629197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.629405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.629432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.629819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.630174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.630202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.630573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.630940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.630967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.631357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.631605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.631634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 
00:26:45.087 [2024-04-26 15:03:27.632060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.632151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.632176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.632429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.632796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.632823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.632950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.633311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.633338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.633719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.634091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.634120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.634501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.634766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.634793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.635184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.635412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.635438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.635891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.636240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.636267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 
00:26:45.087 [2024-04-26 15:03:27.636716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.637098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.637126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.637473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.637719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.637745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.637992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.638230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.638257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.638631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.639006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.639034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.639429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.639649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.639675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.639955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.640357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.640384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.640621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.640985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.641013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 
00:26:45.087 [2024-04-26 15:03:27.641390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.641760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.641798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.642193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.642560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.642586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.642969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.643209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.643236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.087 qpair failed and we were unable to recover it. 00:26:45.087 [2024-04-26 15:03:27.643610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.087 [2024-04-26 15:03:27.644063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.644091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.644361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.644621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.644648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.644962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.645322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.645349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.645722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.645937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.645965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 
00:26:45.088 [2024-04-26 15:03:27.646458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.646676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.646703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.646921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.647289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.647316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.647577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.647712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.647742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.648167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.648541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.648570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.648942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 15:03:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:45.088 [2024-04-26 15:03:27.649338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.649365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 15:03:27 -- common/autotest_common.sh@850 -- # return 0 00:26:45.088 [2024-04-26 15:03:27.649751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 15:03:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:45.088 [2024-04-26 15:03:27.650091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.650119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 
00:26:45.088 15:03:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:45.088 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:26:45.088 [2024-04-26 15:03:27.650406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.650810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.650845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.651255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.651726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.651756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.652035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.652417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.652445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.652670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.653086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.653115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.653390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.653747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.653774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.654041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.654303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.654333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.654705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.654994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.655023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 
00:26:45.088 [2024-04-26 15:03:27.655402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.655759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.655787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.656169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.656516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.656545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.656924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.657302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.657331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.657785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.658141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.658169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.658269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.658515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.658541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.658887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.659278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.659305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.659690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.659909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.659943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 
00:26:45.088 [2024-04-26 15:03:27.660332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.660688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.660717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.661153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.661367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.661394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.661802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.662172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.662203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.088 qpair failed and we were unable to recover it. 00:26:45.088 [2024-04-26 15:03:27.662584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.088 [2024-04-26 15:03:27.663064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.663091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.663449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.663675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.663702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.664167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.664417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.664446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.664712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.665061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.665091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 
00:26:45.089 [2024-04-26 15:03:27.665347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.665698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.665725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.665996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.666219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.666246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.666642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.666893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.666927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.667311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.667522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.667548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.667949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.668148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.668175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.668591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.669020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.669049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.669415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.669522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.669549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 
00:26:45.089 [2024-04-26 15:03:27.669938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.670303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.670330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.670542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.670863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.670893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.671204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.671541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.671569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.671961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.672205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.672232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.672478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.672700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.672728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.672961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.673195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.673228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.673612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.673976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.674004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 
00:26:45.089 [2024-04-26 15:03:27.674376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.674748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.674775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.675139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.675366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.675394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.675497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.675633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.675660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.676027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.676394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.676421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.676792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.677064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.677093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.677476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.677865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.677894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.678284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.678643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.678670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 
00:26:45.089 [2024-04-26 15:03:27.678889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.679283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.679313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.679607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.679860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.679894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.680119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.680206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.680232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.089 qpair failed and we were unable to recover it. 00:26:45.089 [2024-04-26 15:03:27.680481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.680734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.089 [2024-04-26 15:03:27.680762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.681142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.681499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.681525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.681777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.682155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.682184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.682560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.682935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.682966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 
00:26:45.090 [2024-04-26 15:03:27.683349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.683704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.683732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.684102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.684347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.684374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.684721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.685093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.685122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.685486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.685851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.685880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.686235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.686604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.686631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.687000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.687222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.687249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.687604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.687975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.688004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 
00:26:45.090 [2024-04-26 15:03:27.688359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.688741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.688769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.689153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.689515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.689541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.689765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.690133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.690161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.690586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.690823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.690865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.691251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.691544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.691572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.691800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.692042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.692070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.692429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.692542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.692573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 
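The errno = 111 that posix_sock_create keeps reporting above is ECONNREFUSED on Linux: the host side keeps retrying connect() against 10.0.0.2 port 4420 and every attempt is refused, so each qpair is given up on. Refused connections are the scenario this host/target_disconnect.sh run is exercising, so the repetition is expected. A one-line check of the symbolic errno name (illustrative only, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused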
00:26:45.090 15:03:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.090 [2024-04-26 15:03:27.692917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 15:03:27 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:45.090 [2024-04-26 15:03:27.693303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.693333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 15:03:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:26:45.090 [2024-04-26 15:03:27.693717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.694072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.694100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.694200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.694609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.694636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.694887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.695230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.695258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.695641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.696022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.696050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.696436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.696807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.696834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 
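Interleaved with the connection-error noise, the script starts building the target configuration. The bdev_malloc_create call above creates the RAM-backed block device that will later be exposed as a namespace: 64 MB total, 512-byte blocks, named Malloc0. As a sketch, assuming the test's rpc_cmd wrapper is forwarding to SPDK's scripts/rpc.py over the default RPC socket, the standalone equivalent would look like:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB malloc bdev, 512-byte block size, named Malloc0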
00:26:45.090 [2024-04-26 15:03:27.697191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.697496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.697523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.697908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.698287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.698314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.698723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.699100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.699128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.090 [2024-04-26 15:03:27.699491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.699861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.090 [2024-04-26 15:03:27.699889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.090 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.700159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.700524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.700551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.700864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.701131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.701158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.701399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.701741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.701767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 
00:26:45.091 [2024-04-26 15:03:27.702122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.702498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.702524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.702896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.703258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.703286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.703546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.703758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.703784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.704233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.704610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.704637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.705003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.705319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.705345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.705748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.706123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.706152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.706542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.706899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.706927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 
00:26:45.091 [2024-04-26 15:03:27.707330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.707731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.707759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.708119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.708365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.708391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.708652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.709072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.709100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.709459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.709671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.709698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.710078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.710443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.710469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.710851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.711252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.711279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.711544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.711960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.711988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 
00:26:45.091 [2024-04-26 15:03:27.712364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.712574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.712601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.712988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.713368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.713396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.713773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.714120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.714149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.714475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.714858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.714886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.715267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.715589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.715615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.715872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.716222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.716248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 00:26:45.091 [2024-04-26 15:03:27.716486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.716883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.716911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.091 qpair failed and we were unable to recover it. 
00:26:45.091 [2024-04-26 15:03:27.717298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.091 [2024-04-26 15:03:27.717554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.717583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 Malloc0 00:26:45.092 [2024-04-26 15:03:27.718023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.718274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.718301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 15:03:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.092 [2024-04-26 15:03:27.718674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 15:03:27 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:45.092 [2024-04-26 15:03:27.718929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.718956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 15:03:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.092 [2024-04-26 15:03:27.719199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:26:45.092 [2024-04-26 15:03:27.719569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.719597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.720027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.720433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.720462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.720857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.721111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.721144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 
00:26:45.092 [2024-04-26 15:03:27.721243] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.092 [2024-04-26 15:03:27.721527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.721872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.721900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.722267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.722679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.722705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.723105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.723333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.723361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.723725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.724090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.724118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.724545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.724927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.724955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.725333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.725705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.725732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.726077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.726514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.726542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 
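The nvmf_create_transport call is what produces the "*** TCP Transport Init ***" notice above: it instantiates the NVMe-oF TCP transport inside the target before any subsystem or listener exists. A rough standalone sketch of the same step (the extra -o flag is reproduced verbatim from the test; check scripts/rpc.py nvmf_create_transport --help for its exact meaning rather than assuming it here):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o   # create the TCP transport in the running nvmf target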
00:26:45.092 [2024-04-26 15:03:27.726897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.727153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.727179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.727587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.727932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.727959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.728344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.728716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.728743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.729010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.729355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.729382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.729734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 15:03:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.092 [2024-04-26 15:03:27.730115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.730148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.730463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 15:03:27 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.092 [2024-04-26 15:03:27.730684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.730710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 
00:26:45.092 15:03:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.092 [2024-04-26 15:03:27.730936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:26:45.092 [2024-04-26 15:03:27.731153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.731179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.731596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.731813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.731848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.732095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.732313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.732340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.732720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.732960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.732991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.733360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.733770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.733796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.734088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.734455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.734482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 00:26:45.092 [2024-04-26 15:03:27.734865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.735244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.092 [2024-04-26 15:03:27.735272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.092 qpair failed and we were unable to recover it. 
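Next the script creates the NVM subsystem the host will connect to, nqn.2016-06.io.spdk:cnode1, where -a allows any host NQN to connect and -s sets the serial number. Standalone sketch of the same step:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number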
00:26:45.355 [2024-04-26 15:03:27.735665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.736033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.736064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.736310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.736679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.736706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.737121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.737445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.737471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.737834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 15:03:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.356 [2024-04-26 15:03:27.738081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.738111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.738269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 15:03:27 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:45.356 [2024-04-26 15:03:27.738680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.738706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 15:03:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.356 [2024-04-26 15:03:27.738935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:26:45.356 [2024-04-26 15:03:27.739345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.739371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.739730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.740103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.740130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 
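The Malloc0 bdev created earlier is then attached to that subsystem as a namespace (it will typically come up as the first free namespace ID), which is what a host will see once a connection finally succeeds. Sketch:

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace of cnode1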
00:26:45.356 [2024-04-26 15:03:27.740508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.740888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.740917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.741350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.741562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.741588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.741965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.742360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.742387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.742754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.743005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.743033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.743240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.743482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.743511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.743754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.744100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.744128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.744515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.744883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.744911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 
00:26:45.356 [2024-04-26 15:03:27.745278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.745650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.745677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.746064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 15:03:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.356 [2024-04-26 15:03:27.746288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.746316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 15:03:27 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.356 [2024-04-26 15:03:27.746665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 15:03:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.356 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:26:45.356 [2024-04-26 15:03:27.747066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.747094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.747542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.747756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.747783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.747998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.748344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.748371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.748689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.749073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.749101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0670000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 
00:26:45.356 [2024-04-26 15:03:27.749472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-04-26 15:03:27.749608] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.356 15:03:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.356 15:03:27 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:45.356 15:03:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.356 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:26:45.356 [2024-04-26 15:03:27.756285] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:26:45.356 [2024-04-26 15:03:27.756400] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0670000b90 (107): Transport endpoint is not connected 00:26:45.356 [2024-04-26 15:03:27.756511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-04-26 15:03:27.761991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.356 15:03:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.356 [2024-04-26 15:03:27.762154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.356 [2024-04-26 15:03:27.762200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.356 [2024-04-26 15:03:27.762221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.356 [2024-04-26 15:03:27.762238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.762281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 15:03:27 -- host/target_disconnect.sh@58 -- # wait 1231138 00:26:45.357 [2024-04-26 15:03:27.771986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.772116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.772149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.772162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.772174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.772206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 
00:26:45.357 [2024-04-26 15:03:27.781958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.782045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.782070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.782079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.782087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.782108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-04-26 15:03:27.791950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.792056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.792079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.792089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.792098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.792118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-04-26 15:03:27.801950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.802031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.802051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.802058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.802065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.802082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 
00:26:45.357 [2024-04-26 15:03:27.812015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.812094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.812115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.812123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.812129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.812147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-04-26 15:03:27.821963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.822031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.822052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.822063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.822069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.822087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-04-26 15:03:27.831997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.832087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.832107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.832115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.832121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.832139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 
00:26:45.357 [2024-04-26 15:03:27.841964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.842029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.842050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.842058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.842064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.842082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-04-26 15:03:27.852063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.852129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.852149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.852157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.852163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.852180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-04-26 15:03:27.862097] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.862164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.862185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.862192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.862199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.862215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 
00:26:45.357 [2024-04-26 15:03:27.872124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.872223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.872243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.872252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.872258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.357 [2024-04-26 15:03:27.872275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-04-26 15:03:27.882186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.357 [2024-04-26 15:03:27.882257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.357 [2024-04-26 15:03:27.882276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.357 [2024-04-26 15:03:27.882283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.357 [2024-04-26 15:03:27.882289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.882305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-04-26 15:03:27.892227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.892295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.892320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.892330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.892337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.892354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 
00:26:45.358 [2024-04-26 15:03:27.902220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.902291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.902313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.902320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.902327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.902344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-04-26 15:03:27.912256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.912339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.912365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.912372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.912378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.912395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-04-26 15:03:27.922197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.922259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.922278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.922286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.922292] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.922307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 
00:26:45.358 [2024-04-26 15:03:27.932335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.932408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.932427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.932435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.932441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.932458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-04-26 15:03:27.942296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.942363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.942383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.942391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.942397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.942414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-04-26 15:03:27.952398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.952476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.952495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.952502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.952509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.952535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 
00:26:45.358 [2024-04-26 15:03:27.962421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.962536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.962562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.962570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.962577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.962595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-04-26 15:03:27.972477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.972548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.972569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.972576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.972583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.972599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-04-26 15:03:27.982369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.982449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.982469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.982476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.982482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.982498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 
00:26:45.358 [2024-04-26 15:03:27.992523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:27.992606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:27.992641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:27.992649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:27.992656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:27.992678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-04-26 15:03:28.002557] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:28.002629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:28.002669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:28.002679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:28.002686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.358 [2024-04-26 15:03:28.002708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-04-26 15:03:28.012601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.358 [2024-04-26 15:03:28.012677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.358 [2024-04-26 15:03:28.012700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.358 [2024-04-26 15:03:28.012707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.358 [2024-04-26 15:03:28.012714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.359 [2024-04-26 15:03:28.012732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.359 qpair failed and we were unable to recover it. 
00:26:45.648 [2024-04-26 15:03:28.022613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.648 [2024-04-26 15:03:28.022724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.648 [2024-04-26 15:03:28.022744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.648 [2024-04-26 15:03:28.022752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.648 [2024-04-26 15:03:28.022759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.648 [2024-04-26 15:03:28.022777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.648 qpair failed and we were unable to recover it. 00:26:45.648 [2024-04-26 15:03:28.032659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.648 [2024-04-26 15:03:28.032741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.648 [2024-04-26 15:03:28.032761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.648 [2024-04-26 15:03:28.032769] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.648 [2024-04-26 15:03:28.032777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.648 [2024-04-26 15:03:28.032794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.648 qpair failed and we were unable to recover it. 00:26:45.648 [2024-04-26 15:03:28.042604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.648 [2024-04-26 15:03:28.042667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.648 [2024-04-26 15:03:28.042688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.648 [2024-04-26 15:03:28.042695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.648 [2024-04-26 15:03:28.042701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.648 [2024-04-26 15:03:28.042724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.648 qpair failed and we were unable to recover it. 
00:26:45.648 [2024-04-26 15:03:28.052868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.052949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.052969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.052976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.052982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.052999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 00:26:45.649 [2024-04-26 15:03:28.062637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.062705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.062727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.062737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.062744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.062762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 00:26:45.649 [2024-04-26 15:03:28.072861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.072940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.072961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.072968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.072974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.072992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 
00:26:45.649 [2024-04-26 15:03:28.082880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.082953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.082972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.082979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.082985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.083001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 00:26:45.649 [2024-04-26 15:03:28.092849] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.092923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.092942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.092949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.092956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.092972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 00:26:45.649 [2024-04-26 15:03:28.102813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.102879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.102899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.102906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.102913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.102929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 
00:26:45.649 [2024-04-26 15:03:28.112899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.112971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.112990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.112997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.113003] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.113019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 00:26:45.649 [2024-04-26 15:03:28.122920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.122997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.123017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.123024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.123030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.123047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 00:26:45.649 [2024-04-26 15:03:28.132825] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.132906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.132927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.132934] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.132946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.132964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 
00:26:45.649 [2024-04-26 15:03:28.142951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.143025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.143047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.143054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.143060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.143078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 00:26:45.649 [2024-04-26 15:03:28.153017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.153095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.649 [2024-04-26 15:03:28.153115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.649 [2024-04-26 15:03:28.153122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.649 [2024-04-26 15:03:28.153128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.649 [2024-04-26 15:03:28.153144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.649 qpair failed and we were unable to recover it. 00:26:45.649 [2024-04-26 15:03:28.163023] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.649 [2024-04-26 15:03:28.163086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.163105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.163112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.163119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.163136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 
00:26:45.650 [2024-04-26 15:03:28.173076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.173143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.173162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.173169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.173175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.173191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 00:26:45.650 [2024-04-26 15:03:28.183065] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.183130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.183150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.183157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.183164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.183180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 00:26:45.650 [2024-04-26 15:03:28.193125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.193231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.193251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.193259] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.193266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.193282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 
00:26:45.650 [2024-04-26 15:03:28.203134] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.203201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.203220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.203227] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.203234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.203251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 00:26:45.650 [2024-04-26 15:03:28.213166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.213252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.213270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.213278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.213284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.213300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 00:26:45.650 [2024-04-26 15:03:28.223205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.223267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.223286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.223299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.223305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.223321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 
00:26:45.650 [2024-04-26 15:03:28.233244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.233359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.233379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.233386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.233393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.233410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 00:26:45.650 [2024-04-26 15:03:28.243261] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.243330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.243349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.243357] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.243363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.243381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 00:26:45.650 [2024-04-26 15:03:28.253300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.253357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.253377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.253384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.253390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.650 [2024-04-26 15:03:28.253406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.650 qpair failed and we were unable to recover it. 
00:26:45.650 [2024-04-26 15:03:28.263344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.650 [2024-04-26 15:03:28.263410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.650 [2024-04-26 15:03:28.263429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.650 [2024-04-26 15:03:28.263437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.650 [2024-04-26 15:03:28.263443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.651 [2024-04-26 15:03:28.263461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.651 qpair failed and we were unable to recover it. 00:26:45.651 [2024-04-26 15:03:28.273375] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.651 [2024-04-26 15:03:28.273453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.651 [2024-04-26 15:03:28.273472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.651 [2024-04-26 15:03:28.273480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.651 [2024-04-26 15:03:28.273486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.651 [2024-04-26 15:03:28.273502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.651 qpair failed and we were unable to recover it. 00:26:45.651 [2024-04-26 15:03:28.283395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.651 [2024-04-26 15:03:28.283475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.651 [2024-04-26 15:03:28.283494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.651 [2024-04-26 15:03:28.283501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.651 [2024-04-26 15:03:28.283507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.651 [2024-04-26 15:03:28.283523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.651 qpair failed and we were unable to recover it. 
00:26:45.651 [2024-04-26 15:03:28.293426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.651 [2024-04-26 15:03:28.293491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.651 [2024-04-26 15:03:28.293510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.651 [2024-04-26 15:03:28.293518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.651 [2024-04-26 15:03:28.293524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.651 [2024-04-26 15:03:28.293541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.651 qpair failed and we were unable to recover it. 00:26:45.651 [2024-04-26 15:03:28.303469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.651 [2024-04-26 15:03:28.303536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.651 [2024-04-26 15:03:28.303555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.651 [2024-04-26 15:03:28.303563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.651 [2024-04-26 15:03:28.303569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.651 [2024-04-26 15:03:28.303586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.651 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.313513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.313596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.313623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.313630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.313636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.313653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 
00:26:45.914 [2024-04-26 15:03:28.323469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.323543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.323565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.323573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.323579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.323598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.333572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.333645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.333666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.333673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.333681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.333699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.343577] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.343650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.343670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.343678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.343684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.343701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 
00:26:45.914 [2024-04-26 15:03:28.353617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.353693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.353713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.353720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.353726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.353743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.363514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.363578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.363599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.363606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.363613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.363634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.373563] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.373635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.373655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.373662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.373669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.373685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 
00:26:45.914 [2024-04-26 15:03:28.383578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.383650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.383670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.383678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.383684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.383701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.393703] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.393781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.393801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.393808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.393814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.393831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.403726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.403796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.403821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.403828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.403835] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.403858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 
00:26:45.914 [2024-04-26 15:03:28.413713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.413792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.413812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.413820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.413826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.413850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.423761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.423834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.423860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.423867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.423874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.423891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.433870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.433948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.433967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.433974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.433981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.433997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 
00:26:45.914 [2024-04-26 15:03:28.443857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.443916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.443937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.443944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.443950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.443973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.453911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.453983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.454002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.454009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.454015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.454032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.463935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.464048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.464067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.464074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.464080] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.464096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 
00:26:45.914 [2024-04-26 15:03:28.473954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.474075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.474094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.474101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.474109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.474125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.483906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.914 [2024-04-26 15:03:28.483967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.914 [2024-04-26 15:03:28.483989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.914 [2024-04-26 15:03:28.483996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.914 [2024-04-26 15:03:28.484002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.914 [2024-04-26 15:03:28.484019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.914 qpair failed and we were unable to recover it. 00:26:45.914 [2024-04-26 15:03:28.494049] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.915 [2024-04-26 15:03:28.494122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.915 [2024-04-26 15:03:28.494147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.915 [2024-04-26 15:03:28.494154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.915 [2024-04-26 15:03:28.494160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.915 [2024-04-26 15:03:28.494177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.915 qpair failed and we were unable to recover it. 
00:26:45.915 [2024-04-26 15:03:28.503937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.915 [2024-04-26 15:03:28.504005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.915 [2024-04-26 15:03:28.504024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.915 [2024-04-26 15:03:28.504031] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.915 [2024-04-26 15:03:28.504037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.915 [2024-04-26 15:03:28.504053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.915 qpair failed and we were unable to recover it. 00:26:45.915 [2024-04-26 15:03:28.514120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.915 [2024-04-26 15:03:28.514207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.915 [2024-04-26 15:03:28.514228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.915 [2024-04-26 15:03:28.514235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.915 [2024-04-26 15:03:28.514241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.915 [2024-04-26 15:03:28.514258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.915 qpair failed and we were unable to recover it. 00:26:45.915 [2024-04-26 15:03:28.524148] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.915 [2024-04-26 15:03:28.524212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.915 [2024-04-26 15:03:28.524231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.915 [2024-04-26 15:03:28.524239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.915 [2024-04-26 15:03:28.524246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.915 [2024-04-26 15:03:28.524263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.915 qpair failed and we were unable to recover it. 
00:26:45.915 [2024-04-26 15:03:28.534054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.915 [2024-04-26 15:03:28.534115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.915 [2024-04-26 15:03:28.534134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.915 [2024-04-26 15:03:28.534141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.915 [2024-04-26 15:03:28.534154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.915 [2024-04-26 15:03:28.534177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.915 qpair failed and we were unable to recover it. 00:26:45.915 [2024-04-26 15:03:28.544191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.915 [2024-04-26 15:03:28.544255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.915 [2024-04-26 15:03:28.544276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.915 [2024-04-26 15:03:28.544284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.915 [2024-04-26 15:03:28.544290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.915 [2024-04-26 15:03:28.544307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.915 qpair failed and we were unable to recover it. 00:26:45.915 [2024-04-26 15:03:28.554243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.915 [2024-04-26 15:03:28.554315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.915 [2024-04-26 15:03:28.554334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.915 [2024-04-26 15:03:28.554341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.915 [2024-04-26 15:03:28.554347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.915 [2024-04-26 15:03:28.554365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.915 qpair failed and we were unable to recover it. 
00:26:45.915 [2024-04-26 15:03:28.564240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.915 [2024-04-26 15:03:28.564305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.915 [2024-04-26 15:03:28.564324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.915 [2024-04-26 15:03:28.564333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.915 [2024-04-26 15:03:28.564340] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.915 [2024-04-26 15:03:28.564356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.915 qpair failed and we were unable to recover it. 00:26:45.915 [2024-04-26 15:03:28.574339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.915 [2024-04-26 15:03:28.574429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.915 [2024-04-26 15:03:28.574448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.915 [2024-04-26 15:03:28.574456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.915 [2024-04-26 15:03:28.574462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:45.915 [2024-04-26 15:03:28.574479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.915 qpair failed and we were unable to recover it. 00:26:46.177 [2024-04-26 15:03:28.584307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.584378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.584398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.584405] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.584411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.584428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 
00:26:46.177 [2024-04-26 15:03:28.594369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.594440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.594460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.594467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.594474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.594490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-04-26 15:03:28.604279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.604352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.604370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.604377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.604384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.604400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-04-26 15:03:28.614391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.614509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.614530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.614537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.614543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.614559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 
00:26:46.177 [2024-04-26 15:03:28.624418] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.624483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.624502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.624520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.624526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.624542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-04-26 15:03:28.634446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.634542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.634562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.634569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.634575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.634592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-04-26 15:03:28.644369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.644438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.644460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.644468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.644474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.644491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 
00:26:46.177 [2024-04-26 15:03:28.654523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.654594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.654615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.654622] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.654628] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.654644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-04-26 15:03:28.664531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.664594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.664613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.664620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.664627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.664643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-04-26 15:03:28.674594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.674679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.674698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.674705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.674711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.674727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 
00:26:46.177 [2024-04-26 15:03:28.684637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.684709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.684728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.684735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.684741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.684757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-04-26 15:03:28.694617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.694689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.694708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.694715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.694721] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.177 [2024-04-26 15:03:28.694737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-04-26 15:03:28.704666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.177 [2024-04-26 15:03:28.704730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.177 [2024-04-26 15:03:28.704749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.177 [2024-04-26 15:03:28.704756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.177 [2024-04-26 15:03:28.704762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.704778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 
00:26:46.178 [2024-04-26 15:03:28.714595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.714668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.714687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.714700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.714707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.714722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-04-26 15:03:28.724765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.724844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.724864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.724872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.724878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.724894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-04-26 15:03:28.734769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.734829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.734859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.734866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.734872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.734889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 
00:26:46.178 [2024-04-26 15:03:28.744880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.744959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.744981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.744992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.744998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.745017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-04-26 15:03:28.754823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.754907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.754927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.754935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.754941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.754958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-04-26 15:03:28.764831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.764909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.764928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.764935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.764942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.764958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 
00:26:46.178 [2024-04-26 15:03:28.774887] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.774960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.774980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.774988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.774994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.775011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-04-26 15:03:28.784806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.784882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.784901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.784908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.784914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.784930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-04-26 15:03:28.794948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.795029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.795048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.795055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.795061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.795077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 
00:26:46.178 [2024-04-26 15:03:28.804970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.805053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.805077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.805084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.805090] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.805107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-04-26 15:03:28.814996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.815066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.815086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.815093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.815100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.815116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-04-26 15:03:28.824951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.825017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.825036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.825044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.825050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.825065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.178 qpair failed and we were unable to recover it. 
00:26:46.178 [2024-04-26 15:03:28.835030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.178 [2024-04-26 15:03:28.835111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.178 [2024-04-26 15:03:28.835132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.178 [2024-04-26 15:03:28.835139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.178 [2024-04-26 15:03:28.835145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.178 [2024-04-26 15:03:28.835163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.440 [2024-04-26 15:03:28.844978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.440 [2024-04-26 15:03:28.845053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.440 [2024-04-26 15:03:28.845074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.440 [2024-04-26 15:03:28.845082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.440 [2024-04-26 15:03:28.845089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.440 [2024-04-26 15:03:28.845112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.440 qpair failed and we were unable to recover it. 00:26:46.440 [2024-04-26 15:03:28.855159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.440 [2024-04-26 15:03:28.855236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.440 [2024-04-26 15:03:28.855255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.440 [2024-04-26 15:03:28.855263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.440 [2024-04-26 15:03:28.855269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.440 [2024-04-26 15:03:28.855285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.440 qpair failed and we were unable to recover it. 
00:26:46.440 [2024-04-26 15:03:28.865201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.440 [2024-04-26 15:03:28.865263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.440 [2024-04-26 15:03:28.865282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.440 [2024-04-26 15:03:28.865289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.440 [2024-04-26 15:03:28.865296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.440 [2024-04-26 15:03:28.865311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.440 qpair failed and we were unable to recover it. 00:26:46.440 [2024-04-26 15:03:28.875100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.440 [2024-04-26 15:03:28.875179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.440 [2024-04-26 15:03:28.875198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.440 [2024-04-26 15:03:28.875205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.440 [2024-04-26 15:03:28.875211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.440 [2024-04-26 15:03:28.875227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.440 qpair failed and we were unable to recover it. 00:26:46.440 [2024-04-26 15:03:28.885248] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.440 [2024-04-26 15:03:28.885328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.440 [2024-04-26 15:03:28.885346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.440 [2024-04-26 15:03:28.885353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.440 [2024-04-26 15:03:28.885360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.440 [2024-04-26 15:03:28.885375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.440 qpair failed and we were unable to recover it. 
00:26:46.440 [2024-04-26 15:03:28.895270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.895336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.895360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.895367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.895373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.895389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 00:26:46.441 [2024-04-26 15:03:28.905305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.905386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.905405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.905412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.905418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.905434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 00:26:46.441 [2024-04-26 15:03:28.915357] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.915442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.915462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.915469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.915475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.915491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 
00:26:46.441 [2024-04-26 15:03:28.925260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.925319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.925338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.925346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.925352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.925367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 00:26:46.441 [2024-04-26 15:03:28.935410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.935496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.935516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.935523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.935535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.935552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 00:26:46.441 [2024-04-26 15:03:28.945445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.945511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.945532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.945539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.945545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.945562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 
00:26:46.441 [2024-04-26 15:03:28.955495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.955564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.955585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.955593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.955599] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.955615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 00:26:46.441 [2024-04-26 15:03:28.965507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.965577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.965596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.965603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.965610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.965626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 00:26:46.441 [2024-04-26 15:03:28.975550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.975644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.975663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.975671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.975677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.975693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 
00:26:46.441 [2024-04-26 15:03:28.985565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.985642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.441 [2024-04-26 15:03:28.985662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.441 [2024-04-26 15:03:28.985669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.441 [2024-04-26 15:03:28.985676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.441 [2024-04-26 15:03:28.985691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.441 qpair failed and we were unable to recover it. 00:26:46.441 [2024-04-26 15:03:28.995612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.441 [2024-04-26 15:03:28.995716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:28.995736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:28.995743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:28.995749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:28.995765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 00:26:46.442 [2024-04-26 15:03:29.005651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.005724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.005743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.005750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.005757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.005773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 
00:26:46.442 [2024-04-26 15:03:29.015685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.015790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.015810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.015818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.015825] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.015848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 00:26:46.442 [2024-04-26 15:03:29.025688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.025751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.025770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.025783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.025790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.025806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 00:26:46.442 [2024-04-26 15:03:29.035741] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.035830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.035856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.035864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.035870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.035887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 
00:26:46.442 [2024-04-26 15:03:29.045678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.045800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.045820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.045827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.045834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.045859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 00:26:46.442 [2024-04-26 15:03:29.055817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.055895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.055916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.055923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.055929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.055946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 00:26:46.442 [2024-04-26 15:03:29.065846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.065914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.065933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.065940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.065947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.065963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 
00:26:46.442 [2024-04-26 15:03:29.075891] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.076027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.076046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.076053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.076059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.076075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 00:26:46.442 [2024-04-26 15:03:29.085888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.085954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.085974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.085981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.085987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.086003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 00:26:46.442 [2024-04-26 15:03:29.095944] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.442 [2024-04-26 15:03:29.096016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.442 [2024-04-26 15:03:29.096035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.442 [2024-04-26 15:03:29.096043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.442 [2024-04-26 15:03:29.096049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.442 [2024-04-26 15:03:29.096064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.442 qpair failed and we were unable to recover it. 
00:26:46.705 [2024-04-26 15:03:29.105949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.106017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.106036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.106044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.106050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.106066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 00:26:46.705 [2024-04-26 15:03:29.115994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.116075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.116095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.116109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.116116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.116132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 00:26:46.705 [2024-04-26 15:03:29.126025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.126099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.126119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.126126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.126133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.126149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 
00:26:46.705 [2024-04-26 15:03:29.136066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.136177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.136197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.136204] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.136211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.136227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 00:26:46.705 [2024-04-26 15:03:29.145979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.146070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.146090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.146098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.146104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.146120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 00:26:46.705 [2024-04-26 15:03:29.156154] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.156235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.156254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.156262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.156268] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.156283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 
00:26:46.705 [2024-04-26 15:03:29.166132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.166190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.166209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.166216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.166222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.166238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 00:26:46.705 [2024-04-26 15:03:29.176195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.176267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.176286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.176293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.176300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.176315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 00:26:46.705 [2024-04-26 15:03:29.186216] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.186337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.186359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.186371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.186377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.186394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 
00:26:46.705 [2024-04-26 15:03:29.196142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.705 [2024-04-26 15:03:29.196215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.705 [2024-04-26 15:03:29.196235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.705 [2024-04-26 15:03:29.196242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.705 [2024-04-26 15:03:29.196248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.705 [2024-04-26 15:03:29.196265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.705 qpair failed and we were unable to recover it. 00:26:46.705 [2024-04-26 15:03:29.206274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.206386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.206410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.206419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.206426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.206442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.216289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.216361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.216381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.216388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.216394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.216410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 
00:26:46.706 [2024-04-26 15:03:29.226332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.226401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.226420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.226427] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.226434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.226449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.236384] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.236465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.236483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.236490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.236497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.236512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.246392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.246457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.246478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.246485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.246492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.246514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 
00:26:46.706 [2024-04-26 15:03:29.256459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.256524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.256543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.256550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.256556] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.256572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.266340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.266404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.266426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.266433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.266439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.266456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.276374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.276444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.276464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.276472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.276478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.276494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 
00:26:46.706 [2024-04-26 15:03:29.286503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.286574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.286593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.286601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.286607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.286622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.296601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.296662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.296686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.296693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.296699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.296715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.306617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.306729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.306748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.306755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.306762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.306779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 
00:26:46.706 [2024-04-26 15:03:29.316616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.316704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.316724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.316732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.316739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.316756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.326613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.326674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.326692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.326699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.326706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.326721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.336669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.336752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.336771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.336780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.336797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.336814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 
00:26:46.706 [2024-04-26 15:03:29.346757] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.346829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.346856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.346863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.346869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.346887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.356748] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.356833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.356857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.356865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.356871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.356887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 00:26:46.706 [2024-04-26 15:03:29.366774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.706 [2024-04-26 15:03:29.366855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.706 [2024-04-26 15:03:29.366874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.706 [2024-04-26 15:03:29.366881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.706 [2024-04-26 15:03:29.366887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.706 [2024-04-26 15:03:29.366903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.706 qpair failed and we were unable to recover it. 
00:26:46.993 [2024-04-26 15:03:29.376791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.993 [2024-04-26 15:03:29.376867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.993 [2024-04-26 15:03:29.376886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.993 [2024-04-26 15:03:29.376896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.993 [2024-04-26 15:03:29.376903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.993 [2024-04-26 15:03:29.376921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.993 qpair failed and we were unable to recover it. 00:26:46.993 [2024-04-26 15:03:29.386842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.993 [2024-04-26 15:03:29.386960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.993 [2024-04-26 15:03:29.386981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.993 [2024-04-26 15:03:29.386988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.993 [2024-04-26 15:03:29.386995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.993 [2024-04-26 15:03:29.387011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.993 qpair failed and we were unable to recover it. 00:26:46.993 [2024-04-26 15:03:29.396895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.993 [2024-04-26 15:03:29.396990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.993 [2024-04-26 15:03:29.397009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.993 [2024-04-26 15:03:29.397016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.993 [2024-04-26 15:03:29.397023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.993 [2024-04-26 15:03:29.397039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.993 qpair failed and we were unable to recover it. 
00:26:46.993 [2024-04-26 15:03:29.406900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.993 [2024-04-26 15:03:29.406971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.993 [2024-04-26 15:03:29.406990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.993 [2024-04-26 15:03:29.406997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.993 [2024-04-26 15:03:29.407004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.993 [2024-04-26 15:03:29.407020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.993 qpair failed and we were unable to recover it. 00:26:46.993 [2024-04-26 15:03:29.416785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.993 [2024-04-26 15:03:29.416866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.993 [2024-04-26 15:03:29.416885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.993 [2024-04-26 15:03:29.416893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.993 [2024-04-26 15:03:29.416900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.993 [2024-04-26 15:03:29.416917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.993 qpair failed and we were unable to recover it. 00:26:46.993 [2024-04-26 15:03:29.426981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.993 [2024-04-26 15:03:29.427045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.993 [2024-04-26 15:03:29.427064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.993 [2024-04-26 15:03:29.427071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.993 [2024-04-26 15:03:29.427083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.993 [2024-04-26 15:03:29.427099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.993 qpair failed and we were unable to recover it. 
00:26:46.993 [2024-04-26 15:03:29.437005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.993 [2024-04-26 15:03:29.437075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.993 [2024-04-26 15:03:29.437094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.993 [2024-04-26 15:03:29.437101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.993 [2024-04-26 15:03:29.437107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.993 [2024-04-26 15:03:29.437124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.993 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.447046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.447149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.447170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.447177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.447183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.447200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.457081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.457189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.457208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.457216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.457223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.457239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 
00:26:46.994 [2024-04-26 15:03:29.467073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.467140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.467160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.467167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.467173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.467189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.477097] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.477219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.477238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.477245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.477251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.477267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.487127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.487187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.487206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.487213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.487219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.487235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 
00:26:46.994 [2024-04-26 15:03:29.497166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.497247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.497265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.497272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.497278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.497294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.507110] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.507185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.507205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.507212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.507218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.507240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.517241] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.517322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.517344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.517361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.517367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.517384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 
00:26:46.994 [2024-04-26 15:03:29.527131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.527221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.527241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.527249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.527255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.527271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.537160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.537266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.537286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.537293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.537300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.537315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.547365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.547459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.547480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.547487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.547494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.547511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 
00:26:46.994 [2024-04-26 15:03:29.557341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.557411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.557430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.557438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.557444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.557460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.567370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.567450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.994 [2024-04-26 15:03:29.567469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.994 [2024-04-26 15:03:29.567476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.994 [2024-04-26 15:03:29.567482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.994 [2024-04-26 15:03:29.567498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.994 qpair failed and we were unable to recover it. 00:26:46.994 [2024-04-26 15:03:29.577487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.994 [2024-04-26 15:03:29.577561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.995 [2024-04-26 15:03:29.577580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.995 [2024-04-26 15:03:29.577587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.995 [2024-04-26 15:03:29.577593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.995 [2024-04-26 15:03:29.577609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.995 qpair failed and we were unable to recover it. 
00:26:46.995 [2024-04-26 15:03:29.587477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.995 [2024-04-26 15:03:29.587542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.995 [2024-04-26 15:03:29.587561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.995 [2024-04-26 15:03:29.587569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.995 [2024-04-26 15:03:29.587575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.995 [2024-04-26 15:03:29.587590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.995 qpair failed and we were unable to recover it. 00:26:46.995 [2024-04-26 15:03:29.597341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.995 [2024-04-26 15:03:29.597420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.995 [2024-04-26 15:03:29.597441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.995 [2024-04-26 15:03:29.597448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.995 [2024-04-26 15:03:29.597455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.995 [2024-04-26 15:03:29.597471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.995 qpair failed and we were unable to recover it. 00:26:46.995 [2024-04-26 15:03:29.607548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.995 [2024-04-26 15:03:29.607631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.995 [2024-04-26 15:03:29.607656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.995 [2024-04-26 15:03:29.607664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.995 [2024-04-26 15:03:29.607670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.995 [2024-04-26 15:03:29.607686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.995 qpair failed and we were unable to recover it. 
00:26:46.995 [2024-04-26 15:03:29.617500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.995 [2024-04-26 15:03:29.617575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.995 [2024-04-26 15:03:29.617596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.995 [2024-04-26 15:03:29.617603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.995 [2024-04-26 15:03:29.617609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.995 [2024-04-26 15:03:29.617625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.995 qpair failed and we were unable to recover it. 00:26:46.995 [2024-04-26 15:03:29.627558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.995 [2024-04-26 15:03:29.627619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.995 [2024-04-26 15:03:29.627639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.995 [2024-04-26 15:03:29.627647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.995 [2024-04-26 15:03:29.627653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.995 [2024-04-26 15:03:29.627669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.995 qpair failed and we were unable to recover it. 00:26:46.995 [2024-04-26 15:03:29.637596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.995 [2024-04-26 15:03:29.637674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.995 [2024-04-26 15:03:29.637693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.995 [2024-04-26 15:03:29.637700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.995 [2024-04-26 15:03:29.637706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.995 [2024-04-26 15:03:29.637722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.995 qpair failed and we were unable to recover it. 
00:26:46.995 [2024-04-26 15:03:29.647613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.995 [2024-04-26 15:03:29.647690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.995 [2024-04-26 15:03:29.647710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.995 [2024-04-26 15:03:29.647717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.995 [2024-04-26 15:03:29.647724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:46.995 [2024-04-26 15:03:29.647745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.995 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.657537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.657603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.657624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.657632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.657638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.657654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.667665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.667738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.667757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.667765] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.667771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.667786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 
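Editor's note (not part of the captured log): every failure block above reports the Fabrics CONNECT completing with "sct 1, sc 130". Status code type 1 is the command-specific table, and for the NVMe-oF Connect command a status code of 0x82 (decimal 130) is defined by the spec as "Connect Invalid Parameters", which lines up with the target-side "Unknown controller ID 0x1" message. A minimal decode sketch, assuming only the spec-defined Connect status values:

```c
/* Minimal sketch: decode the "sct 1, sc 130" status seen in the log.
 * Values are the NVMe-oF Connect command-specific status codes. */
#include <stdint.h>
#include <stdio.h>

static const char *connect_status_str(uint8_t sct, uint8_t sc)
{
    if (sct != 0x1) {                 /* 0x1 = Command Specific Status */
        return "not a command-specific status";
    }
    switch (sc) {
    case 0x80: return "Connect Incompatible Format";
    case 0x81: return "Connect Controller Busy";
    case 0x82: return "Connect Invalid Parameters";
    case 0x83: return "Connect Restart Discovery";
    case 0x84: return "Connect Invalid Host";
    default:   return "unknown command-specific status";
    }
}

int main(void)
{
    /* "sct 1, sc 130" as reported by _nvme_fabric_qpair_connect_poll */
    printf("%s\n", connect_status_str(1, 130));  /* Connect Invalid Parameters */
    return 0;
}
```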
00:26:47.258 [2024-04-26 15:03:29.677733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.677808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.677827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.677834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.677846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.677862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.687778] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.687844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.687864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.687872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.687878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.687894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.697787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.697859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.697884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.697891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.697897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.697913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 
00:26:47.258 [2024-04-26 15:03:29.707816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.707883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.707902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.707909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.707915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.707930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.717781] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.717858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.717877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.717885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.717891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.717906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.727864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.727922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.727940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.727948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.727954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.727969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 
00:26:47.258 [2024-04-26 15:03:29.737959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.738046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.738065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.738072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.738079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.738100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.747915] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.747979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.747999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.748007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.748013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.748030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.757892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.757983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.758002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.758009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.758016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.758032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 
00:26:47.258 [2024-04-26 15:03:29.768006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.768080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.768098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.768105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.768112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.768128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.778035] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.778116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.778136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.778144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.778150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.778167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 00:26:47.258 [2024-04-26 15:03:29.787952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.258 [2024-04-26 15:03:29.788028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.258 [2024-04-26 15:03:29.788048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.258 [2024-04-26 15:03:29.788055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.258 [2024-04-26 15:03:29.788061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.258 [2024-04-26 15:03:29.788078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.258 qpair failed and we were unable to recover it. 
00:26:47.258 [2024-04-26 15:03:29.798086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.798160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.798178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.798185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.798192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.798208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-04-26 15:03:29.808155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.808226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.808247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.808254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.808260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.808279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-04-26 15:03:29.818189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.818249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.818268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.818276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.818282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.818298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-04-26 15:03:29.828163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.828231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.828251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.828260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.828273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.828288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-04-26 15:03:29.838042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.838111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.838129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.838136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.838142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.838157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-04-26 15:03:29.848215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.848283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.848302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.848309] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.848315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.848331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-04-26 15:03:29.858265] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.858338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.858356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.858363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.858370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.858385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-04-26 15:03:29.868283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.868358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.868377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.868384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.868395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.868411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-04-26 15:03:29.878277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.878350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.878368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.878375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.878381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.878395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-04-26 15:03:29.888340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.888402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.888419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.888426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.888432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.888447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-04-26 15:03:29.898369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.898435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.898451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.898458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.898464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.898480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.259 [2024-04-26 15:03:29.908420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.908482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.908499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.908506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.908512] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.908527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 
00:26:47.259 [2024-04-26 15:03:29.918383] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.259 [2024-04-26 15:03:29.918483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.259 [2024-04-26 15:03:29.918500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.259 [2024-04-26 15:03:29.918511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.259 [2024-04-26 15:03:29.918517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.259 [2024-04-26 15:03:29.918532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.259 qpair failed and we were unable to recover it. 00:26:47.522 [2024-04-26 15:03:29.928460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.522 [2024-04-26 15:03:29.928520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.522 [2024-04-26 15:03:29.928535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.522 [2024-04-26 15:03:29.928542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.522 [2024-04-26 15:03:29.928549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.522 [2024-04-26 15:03:29.928563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-04-26 15:03:29.938487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.522 [2024-04-26 15:03:29.938543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.522 [2024-04-26 15:03:29.938558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.522 [2024-04-26 15:03:29.938565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.522 [2024-04-26 15:03:29.938571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.522 [2024-04-26 15:03:29.938586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.522 qpair failed and we were unable to recover it. 
00:26:47.522 [2024-04-26 15:03:29.948542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.522 [2024-04-26 15:03:29.948614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.522 [2024-04-26 15:03:29.948631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.522 [2024-04-26 15:03:29.948638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.522 [2024-04-26 15:03:29.948644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.522 [2024-04-26 15:03:29.948659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-04-26 15:03:29.958479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.522 [2024-04-26 15:03:29.958550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.522 [2024-04-26 15:03:29.958577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.522 [2024-04-26 15:03:29.958585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.522 [2024-04-26 15:03:29.958592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.522 [2024-04-26 15:03:29.958612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-04-26 15:03:29.968435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:29.968497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:29.968523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:29.968532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:29.968538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:29.968557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 
00:26:47.523 [2024-04-26 15:03:29.978539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:29.978598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:29.978625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:29.978633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:29.978640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:29.978659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-04-26 15:03:29.988495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:29.988564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:29.988580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:29.988587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:29.988593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:29.988609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-04-26 15:03:29.998482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:29.998541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:29.998557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:29.998564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:29.998570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:29.998586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 
00:26:47.523 [2024-04-26 15:03:30.008607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:30.008666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:30.008687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:30.008695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:30.008702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:30.008718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-04-26 15:03:30.018531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:30.018579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:30.018595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:30.018602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:30.018609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:30.018624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-04-26 15:03:30.028728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:30.028787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:30.028802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:30.028809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:30.028815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:30.028830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 
00:26:47.523 [2024-04-26 15:03:30.038718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:30.038817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:30.038832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:30.038844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:30.038850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:30.038865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-04-26 15:03:30.048727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:30.048784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:30.048799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:30.048806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:30.048812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:30.048830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-04-26 15:03:30.058834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:30.058901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:30.058915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:30.058922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:30.058928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:30.058942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 
00:26:47.523 [2024-04-26 15:03:30.068860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:30.068919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:30.068933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:30.068940] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:30.068946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:30.068960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-04-26 15:03:30.078738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:30.078800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:30.078814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:30.078821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-04-26 15:03:30.078827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.523 [2024-04-26 15:03:30.078846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-04-26 15:03:30.088876] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-04-26 15:03:30.088965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-04-26 15:03:30.088979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-04-26 15:03:30.088986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.088992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.089006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 
00:26:47.524 [2024-04-26 15:03:30.098869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-04-26 15:03:30.098935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-04-26 15:03:30.098955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-04-26 15:03:30.098962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.098968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.098982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-04-26 15:03:30.108808] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-04-26 15:03:30.108863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-04-26 15:03:30.108877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-04-26 15:03:30.108884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.108890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.108904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-04-26 15:03:30.118924] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-04-26 15:03:30.118981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-04-26 15:03:30.118995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-04-26 15:03:30.119002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.119008] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.119022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 
00:26:47.524 [2024-04-26 15:03:30.128934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-04-26 15:03:30.128985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-04-26 15:03:30.128998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-04-26 15:03:30.129006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.129012] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.129026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-04-26 15:03:30.138987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-04-26 15:03:30.139045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-04-26 15:03:30.139059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-04-26 15:03:30.139066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.139072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.139090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-04-26 15:03:30.148973] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-04-26 15:03:30.149022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-04-26 15:03:30.149036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-04-26 15:03:30.149043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.149049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.149063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 
00:26:47.524 [2024-04-26 15:03:30.159044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-04-26 15:03:30.159136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-04-26 15:03:30.159149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-04-26 15:03:30.159156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.159162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.159176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-04-26 15:03:30.169038] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-04-26 15:03:30.169081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-04-26 15:03:30.169095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-04-26 15:03:30.169102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.169108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.169121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-04-26 15:03:30.179088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-04-26 15:03:30.179133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-04-26 15:03:30.179146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-04-26 15:03:30.179153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-04-26 15:03:30.179159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.524 [2024-04-26 15:03:30.179173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 
00:26:47.787 [2024-04-26 15:03:30.189119] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.787 [2024-04-26 15:03:30.189205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.787 [2024-04-26 15:03:30.189222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.787 [2024-04-26 15:03:30.189229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.787 [2024-04-26 15:03:30.189235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.787 [2024-04-26 15:03:30.189248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.787 qpair failed and we were unable to recover it. 00:26:47.787 [2024-04-26 15:03:30.199120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.787 [2024-04-26 15:03:30.199169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.787 [2024-04-26 15:03:30.199182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.787 [2024-04-26 15:03:30.199188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.787 [2024-04-26 15:03:30.199194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.787 [2024-04-26 15:03:30.199208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.787 qpair failed and we were unable to recover it. 00:26:47.787 [2024-04-26 15:03:30.209148] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.787 [2024-04-26 15:03:30.209198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.787 [2024-04-26 15:03:30.209212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.787 [2024-04-26 15:03:30.209218] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.787 [2024-04-26 15:03:30.209225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.787 [2024-04-26 15:03:30.209238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.787 qpair failed and we were unable to recover it. 
00:26:47.787 [2024-04-26 15:03:30.219204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.787 [2024-04-26 15:03:30.219255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.787 [2024-04-26 15:03:30.219269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.787 [2024-04-26 15:03:30.219276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.787 [2024-04-26 15:03:30.219282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.787 [2024-04-26 15:03:30.219296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.787 qpair failed and we were unable to recover it. 00:26:47.787 [2024-04-26 15:03:30.229269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.787 [2024-04-26 15:03:30.229339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.787 [2024-04-26 15:03:30.229352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.787 [2024-04-26 15:03:30.229359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.787 [2024-04-26 15:03:30.229369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.787 [2024-04-26 15:03:30.229383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.787 qpair failed and we were unable to recover it. 00:26:47.787 [2024-04-26 15:03:30.239165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.787 [2024-04-26 15:03:30.239218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.787 [2024-04-26 15:03:30.239232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.787 [2024-04-26 15:03:30.239238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.787 [2024-04-26 15:03:30.239244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.787 [2024-04-26 15:03:30.239258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.787 qpair failed and we were unable to recover it. 
00:26:47.787 [2024-04-26 15:03:30.249146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.787 [2024-04-26 15:03:30.249195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.787 [2024-04-26 15:03:30.249209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.787 [2024-04-26 15:03:30.249216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.787 [2024-04-26 15:03:30.249222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.787 [2024-04-26 15:03:30.249241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.787 qpair failed and we were unable to recover it. 00:26:47.787 [2024-04-26 15:03:30.259293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.787 [2024-04-26 15:03:30.259344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.787 [2024-04-26 15:03:30.259359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.787 [2024-04-26 15:03:30.259366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.787 [2024-04-26 15:03:30.259373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.787 [2024-04-26 15:03:30.259391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.787 qpair failed and we were unable to recover it. 00:26:47.787 [2024-04-26 15:03:30.269301] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.787 [2024-04-26 15:03:30.269350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.269364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.269371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.269377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.269391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 
00:26:47.788 [2024-04-26 15:03:30.279349] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.279406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.279420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.279426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.279432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.279446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 00:26:47.788 [2024-04-26 15:03:30.289371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.289465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.289479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.289486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.289492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.289506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 00:26:47.788 [2024-04-26 15:03:30.299388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.299435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.299449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.299455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.299461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.299475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 
00:26:47.788 [2024-04-26 15:03:30.309424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.309509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.309522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.309529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.309535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.309549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 00:26:47.788 [2024-04-26 15:03:30.319330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.319384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.319397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.319408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.319414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.319427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 00:26:47.788 [2024-04-26 15:03:30.329355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.329404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.329418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.329425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.329431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.329444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 
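The ctrlr.c message that opens each iteration is the target-side cause: an I/O-queue CONNECT carries a controller ID (CNTLID) that must match a controller created earlier by an admin-queue CONNECT, and no controller with ID 0x1 exists here. The snippet below is a hypothetical, self-contained sketch of that lookup; the types and names are invented for illustration and do not reproduce SPDK's internal ctrlr.c:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical subsystem state: the controllers created by admin CONNECTs. */
struct demo_ctrlr { uint16_t cntlid; };

static struct demo_ctrlr *find_ctrlr(struct demo_ctrlr *ctrlrs, size_t n, uint16_t cntlid)
{
    for (size_t i = 0; i < n; i++) {
        if (ctrlrs[i].cntlid == cntlid) {
            return &ctrlrs[i];
        }
    }
    /* No match: the CONNECT is answered with a command-specific error. */
    return NULL;
}

int main(void)
{
    struct demo_ctrlr existing[] = { { .cntlid = 0x2 } };
    uint16_t requested = 0x1; /* the ID the failing CONNECTs in the log ask for */

    if (find_ctrlr(existing, 1, requested) == NULL) {
        fprintf(stderr, "Unknown controller ID 0x%x\n", requested);
    }
    return 0;
}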
00:26:47.788 [2024-04-26 15:03:30.339521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.339608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.339622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.339628] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.339634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.339648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 00:26:47.788 [2024-04-26 15:03:30.349527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.349608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.349623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.349630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.349636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.349650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 00:26:47.788 [2024-04-26 15:03:30.359565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.359619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.359643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.359651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.359658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.359677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 
00:26:47.788 [2024-04-26 15:03:30.369579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.369638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.369662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.369670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.369677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.369695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 00:26:47.788 [2024-04-26 15:03:30.379641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.379695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.379719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.379727] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.379733] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.379751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 00:26:47.788 [2024-04-26 15:03:30.389639] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.389691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.389706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.389713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.389719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.389734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 
00:26:47.788 [2024-04-26 15:03:30.399683] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.399740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.399754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.399761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.399767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.788 [2024-04-26 15:03:30.399781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.788 qpair failed and we were unable to recover it. 00:26:47.788 [2024-04-26 15:03:30.409578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.788 [2024-04-26 15:03:30.409626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.788 [2024-04-26 15:03:30.409640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.788 [2024-04-26 15:03:30.409651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.788 [2024-04-26 15:03:30.409657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.789 [2024-04-26 15:03:30.409671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.789 qpair failed and we were unable to recover it. 00:26:47.789 [2024-04-26 15:03:30.419681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.789 [2024-04-26 15:03:30.419731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.789 [2024-04-26 15:03:30.419745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.789 [2024-04-26 15:03:30.419751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.789 [2024-04-26 15:03:30.419758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.789 [2024-04-26 15:03:30.419771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.789 qpair failed and we were unable to recover it. 
00:26:47.789 [2024-04-26 15:03:30.429769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.789 [2024-04-26 15:03:30.429818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.789 [2024-04-26 15:03:30.429832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.789 [2024-04-26 15:03:30.429843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.789 [2024-04-26 15:03:30.429850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.789 [2024-04-26 15:03:30.429865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.789 qpair failed and we were unable to recover it. 00:26:47.789 [2024-04-26 15:03:30.439783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.789 [2024-04-26 15:03:30.439842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.789 [2024-04-26 15:03:30.439856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.789 [2024-04-26 15:03:30.439863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.789 [2024-04-26 15:03:30.439869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.789 [2024-04-26 15:03:30.439883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.789 qpair failed and we were unable to recover it. 00:26:47.789 [2024-04-26 15:03:30.449827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.789 [2024-04-26 15:03:30.449882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.789 [2024-04-26 15:03:30.449897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.789 [2024-04-26 15:03:30.449904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.789 [2024-04-26 15:03:30.449910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:47.789 [2024-04-26 15:03:30.449924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.789 qpair failed and we were unable to recover it. 
00:26:48.051 [2024-04-26 15:03:30.459803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.051 [2024-04-26 15:03:30.459861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.051 [2024-04-26 15:03:30.459876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.051 [2024-04-26 15:03:30.459882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.459889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.459903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.469880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.469964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.469978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.469985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.469991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.470006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.479766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.479824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.479843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.479850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.479857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.479870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 
00:26:48.052 [2024-04-26 15:03:30.489899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.489945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.489959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.489966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.489973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.489986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.499957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.500046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.500063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.500070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.500076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.500090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.509962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.510007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.510021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.510028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.510034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.510047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 
00:26:48.052 [2024-04-26 15:03:30.519986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.520046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.520060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.520067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.520073] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.520087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.530018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.530063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.530077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.530084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.530090] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.530104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.540071] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.540163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.540176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.540183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.540189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.540216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 
00:26:48.052 [2024-04-26 15:03:30.549971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.550022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.550038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.550045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.550051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.550071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.560078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.560133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.560147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.560154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.560160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.560174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.570133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.570187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.570201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.570207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.570213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.570226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 
00:26:48.052 [2024-04-26 15:03:30.580158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.580217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.580231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.580238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.580244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.580259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.590187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.052 [2024-04-26 15:03:30.590233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.052 [2024-04-26 15:03:30.590250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.052 [2024-04-26 15:03:30.590256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.052 [2024-04-26 15:03:30.590263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.052 [2024-04-26 15:03:30.590276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.052 qpair failed and we were unable to recover it. 00:26:48.052 [2024-04-26 15:03:30.600104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.600158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.600171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.600178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.600184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.600203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 
00:26:48.053 [2024-04-26 15:03:30.610264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.610313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.610327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.610334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.610340] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.610353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 00:26:48.053 [2024-04-26 15:03:30.620181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.620274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.620288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.620295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.620301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.620314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 00:26:48.053 [2024-04-26 15:03:30.630280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.630329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.630343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.630350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.630359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.630373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 
00:26:48.053 [2024-04-26 15:03:30.640378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.640460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.640473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.640480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.640486] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.640499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 00:26:48.053 [2024-04-26 15:03:30.650318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.650367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.650380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.650387] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.650393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.650407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 00:26:48.053 [2024-04-26 15:03:30.660394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.660441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.660455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.660461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.660467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.660481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 
00:26:48.053 [2024-04-26 15:03:30.670425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.670472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.670485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.670492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.670498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.670512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 00:26:48.053 [2024-04-26 15:03:30.680484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.680541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.680554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.680561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.680567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.680581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 00:26:48.053 [2024-04-26 15:03:30.690428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.690508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.690522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.690529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.690535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.690548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 
00:26:48.053 [2024-04-26 15:03:30.700371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.700417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.700431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.700438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.700444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.700457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 00:26:48.053 [2024-04-26 15:03:30.710391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.053 [2024-04-26 15:03:30.710442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.053 [2024-04-26 15:03:30.710455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.053 [2024-04-26 15:03:30.710462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.053 [2024-04-26 15:03:30.710468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.053 [2024-04-26 15:03:30.710481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.053 qpair failed and we were unable to recover it. 00:26:48.316 [2024-04-26 15:03:30.720431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.720486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.720501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.720511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.720518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.720531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 
00:26:48.316 [2024-04-26 15:03:30.730451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.730497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.730511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.730517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.730523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.730537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 00:26:48.316 [2024-04-26 15:03:30.740604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.740658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.740672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.740679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.740685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.740699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 00:26:48.316 [2024-04-26 15:03:30.750599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.750687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.750701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.750708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.750714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.750728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 
00:26:48.316 [2024-04-26 15:03:30.760671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.760768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.760782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.760789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.760795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.760808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 00:26:48.316 [2024-04-26 15:03:30.770691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.770736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.770750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.770757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.770763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.770777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 00:26:48.316 [2024-04-26 15:03:30.780724] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.780813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.780826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.780833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.780844] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.780857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 
00:26:48.316 [2024-04-26 15:03:30.790657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.790729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.790743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.790749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.790755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.790768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 00:26:48.316 [2024-04-26 15:03:30.800795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.800856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.800869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.800876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.800882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.800895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 00:26:48.316 [2024-04-26 15:03:30.810800] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.810867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.810883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.810900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.316 [2024-04-26 15:03:30.810906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.316 [2024-04-26 15:03:30.810921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.316 qpair failed and we were unable to recover it. 
00:26:48.316 [2024-04-26 15:03:30.820829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.316 [2024-04-26 15:03:30.820879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.316 [2024-04-26 15:03:30.820894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.316 [2024-04-26 15:03:30.820901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.820907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.820920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.830861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.830921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.830936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.830942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.830949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.830962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.840933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.841024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.841038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.841045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.841051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.841065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 
00:26:48.317 [2024-04-26 15:03:30.850908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.850963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.850978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.850985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.850990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.851005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.860831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.860886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.860900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.860907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.860913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.860927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.870975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.871028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.871042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.871049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.871055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.871068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 
00:26:48.317 [2024-04-26 15:03:30.880993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.881056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.881071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.881081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.881088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.881102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.891012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.891068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.891083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.891089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.891095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.891110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.901050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.901103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.901120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.901127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.901133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.901147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 
00:26:48.317 [2024-04-26 15:03:30.911072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.911122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.911135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.911142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.911148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.911162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.921077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.921126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.921139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.921146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.921152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.921166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.930996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.931044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.931057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.931064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.931070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.931084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 
00:26:48.317 [2024-04-26 15:03:30.941159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.941242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.941256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.941263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.941269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.941286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.951195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.317 [2024-04-26 15:03:30.951242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.317 [2024-04-26 15:03:30.951256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.317 [2024-04-26 15:03:30.951263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.317 [2024-04-26 15:03:30.951269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.317 [2024-04-26 15:03:30.951282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.317 qpair failed and we were unable to recover it. 00:26:48.317 [2024-04-26 15:03:30.961210] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.318 [2024-04-26 15:03:30.961272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.318 [2024-04-26 15:03:30.961286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.318 [2024-04-26 15:03:30.961293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.318 [2024-04-26 15:03:30.961299] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.318 [2024-04-26 15:03:30.961313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.318 qpair failed and we were unable to recover it. 
00:26:48.318 [2024-04-26 15:03:30.971234] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.318 [2024-04-26 15:03:30.971278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.318 [2024-04-26 15:03:30.971292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.318 [2024-04-26 15:03:30.971298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.318 [2024-04-26 15:03:30.971304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.318 [2024-04-26 15:03:30.971317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.318 qpair failed and we were unable to recover it. 00:26:48.580 [2024-04-26 15:03:30.981271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.580 [2024-04-26 15:03:30.981328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.580 [2024-04-26 15:03:30.981341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.580 [2024-04-26 15:03:30.981348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.580 [2024-04-26 15:03:30.981354] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.580 [2024-04-26 15:03:30.981367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.580 qpair failed and we were unable to recover it. 00:26:48.580 [2024-04-26 15:03:30.991350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.580 [2024-04-26 15:03:30.991401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.580 [2024-04-26 15:03:30.991418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.580 [2024-04-26 15:03:30.991425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.580 [2024-04-26 15:03:30.991431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.580 [2024-04-26 15:03:30.991444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.580 qpair failed and we were unable to recover it. 
00:26:48.580 [2024-04-26 15:03:31.001299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.580 [2024-04-26 15:03:31.001353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.580 [2024-04-26 15:03:31.001366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.580 [2024-04-26 15:03:31.001373] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.580 [2024-04-26 15:03:31.001379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.580 [2024-04-26 15:03:31.001393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.580 qpair failed and we were unable to recover it. 00:26:48.580 [2024-04-26 15:03:31.011314] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.580 [2024-04-26 15:03:31.011360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.580 [2024-04-26 15:03:31.011374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.580 [2024-04-26 15:03:31.011381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.580 [2024-04-26 15:03:31.011387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.580 [2024-04-26 15:03:31.011401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.580 qpair failed and we were unable to recover it. 00:26:48.580 [2024-04-26 15:03:31.021380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.580 [2024-04-26 15:03:31.021426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.580 [2024-04-26 15:03:31.021440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.580 [2024-04-26 15:03:31.021446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.580 [2024-04-26 15:03:31.021452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.580 [2024-04-26 15:03:31.021466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.580 qpair failed and we were unable to recover it. 
00:26:48.580 [2024-04-26 15:03:31.031399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.580 [2024-04-26 15:03:31.031449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.580 [2024-04-26 15:03:31.031462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.580 [2024-04-26 15:03:31.031469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.580 [2024-04-26 15:03:31.031479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.580 [2024-04-26 15:03:31.031492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.580 qpair failed and we were unable to recover it. 00:26:48.580 [2024-04-26 15:03:31.041431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.041492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.041506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.041512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.041518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.041532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 00:26:48.581 [2024-04-26 15:03:31.051511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.051578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.051592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.051599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.051605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.051618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 
00:26:48.581 [2024-04-26 15:03:31.061506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.061565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.061588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.061597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.061604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.061621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 00:26:48.581 [2024-04-26 15:03:31.071510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.071566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.071590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.071598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.071604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.071623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 00:26:48.581 [2024-04-26 15:03:31.081541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.081595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.081610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.081617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.081623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.081637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 
00:26:48.581 [2024-04-26 15:03:31.091549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.091602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.091625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.091633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.091640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.091658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 00:26:48.581 [2024-04-26 15:03:31.101475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.101523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.101537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.101544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.101551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.101565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 00:26:48.581 [2024-04-26 15:03:31.111637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.111684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.111698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.111705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.111712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.111725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 
00:26:48.581 [2024-04-26 15:03:31.121657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.121714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.121728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.121735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.121745] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.121759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 00:26:48.581 [2024-04-26 15:03:31.131652] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.131698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.131712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.131719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.131725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.131739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 00:26:48.581 [2024-04-26 15:03:31.141692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.141740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.141754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.141761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.141767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.141780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 
00:26:48.581 [2024-04-26 15:03:31.151727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.581 [2024-04-26 15:03:31.151820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.581 [2024-04-26 15:03:31.151834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.581 [2024-04-26 15:03:31.151845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.581 [2024-04-26 15:03:31.151852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.581 [2024-04-26 15:03:31.151865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.581 qpair failed and we were unable to recover it. 00:26:48.582 [2024-04-26 15:03:31.161804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.582 [2024-04-26 15:03:31.161869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.582 [2024-04-26 15:03:31.161883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.582 [2024-04-26 15:03:31.161890] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.582 [2024-04-26 15:03:31.161896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.582 [2024-04-26 15:03:31.161910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.582 qpair failed and we were unable to recover it. 00:26:48.582 [2024-04-26 15:03:31.171785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.582 [2024-04-26 15:03:31.171834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.582 [2024-04-26 15:03:31.171852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.582 [2024-04-26 15:03:31.171859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.582 [2024-04-26 15:03:31.171865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.582 [2024-04-26 15:03:31.171878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.582 qpair failed and we were unable to recover it. 
00:26:48.582 [2024-04-26 15:03:31.181820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.582 [2024-04-26 15:03:31.181870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.582 [2024-04-26 15:03:31.181884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.582 [2024-04-26 15:03:31.181891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.582 [2024-04-26 15:03:31.181897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.582 [2024-04-26 15:03:31.181911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.582 qpair failed and we were unable to recover it. 00:26:48.582 [2024-04-26 15:03:31.191830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.582 [2024-04-26 15:03:31.191908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.582 [2024-04-26 15:03:31.191922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.582 [2024-04-26 15:03:31.191928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.582 [2024-04-26 15:03:31.191934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.582 [2024-04-26 15:03:31.191948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.582 qpair failed and we were unable to recover it. 00:26:48.582 [2024-04-26 15:03:31.201865] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.582 [2024-04-26 15:03:31.201934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.582 [2024-04-26 15:03:31.201948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.582 [2024-04-26 15:03:31.201955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.582 [2024-04-26 15:03:31.201961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.582 [2024-04-26 15:03:31.201974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.582 qpair failed and we were unable to recover it. 
00:26:48.582 [2024-04-26 15:03:31.211816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.582 [2024-04-26 15:03:31.211869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.582 [2024-04-26 15:03:31.211884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.582 [2024-04-26 15:03:31.211895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.582 [2024-04-26 15:03:31.211901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.582 [2024-04-26 15:03:31.211916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.582 qpair failed and we were unable to recover it. 00:26:48.582 [2024-04-26 15:03:31.221918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.582 [2024-04-26 15:03:31.221972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.582 [2024-04-26 15:03:31.221986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.582 [2024-04-26 15:03:31.221993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.582 [2024-04-26 15:03:31.221999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.582 [2024-04-26 15:03:31.222013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.582 qpair failed and we were unable to recover it. 00:26:48.582 [2024-04-26 15:03:31.231864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.582 [2024-04-26 15:03:31.231916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.582 [2024-04-26 15:03:31.231929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.582 [2024-04-26 15:03:31.231936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.582 [2024-04-26 15:03:31.231942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.582 [2024-04-26 15:03:31.231955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.582 qpair failed and we were unable to recover it. 
00:26:48.582 [2024-04-26 15:03:31.241871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.582 [2024-04-26 15:03:31.241926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.582 [2024-04-26 15:03:31.241939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.582 [2024-04-26 15:03:31.241946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.582 [2024-04-26 15:03:31.241952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.582 [2024-04-26 15:03:31.241966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.582 qpair failed and we were unable to recover it. 00:26:48.846 [2024-04-26 15:03:31.251984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.846 [2024-04-26 15:03:31.252050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.846 [2024-04-26 15:03:31.252064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.846 [2024-04-26 15:03:31.252071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.846 [2024-04-26 15:03:31.252077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.846 [2024-04-26 15:03:31.252091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.846 qpair failed and we were unable to recover it. 00:26:48.846 [2024-04-26 15:03:31.262029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.846 [2024-04-26 15:03:31.262105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.846 [2024-04-26 15:03:31.262119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.846 [2024-04-26 15:03:31.262125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.846 [2024-04-26 15:03:31.262132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.846 [2024-04-26 15:03:31.262145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.846 qpair failed and we were unable to recover it. 
00:26:48.846 [2024-04-26 15:03:31.272060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.846 [2024-04-26 15:03:31.272112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.846 [2024-04-26 15:03:31.272126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.846 [2024-04-26 15:03:31.272132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.846 [2024-04-26 15:03:31.272138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.846 [2024-04-26 15:03:31.272152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.846 qpair failed and we were unable to recover it. 00:26:48.846 [2024-04-26 15:03:31.282055] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.846 [2024-04-26 15:03:31.282119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.846 [2024-04-26 15:03:31.282134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.846 [2024-04-26 15:03:31.282143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.846 [2024-04-26 15:03:31.282151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.846 [2024-04-26 15:03:31.282165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.846 qpair failed and we were unable to recover it. 00:26:48.846 [2024-04-26 15:03:31.292123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.846 [2024-04-26 15:03:31.292171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.846 [2024-04-26 15:03:31.292186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.846 [2024-04-26 15:03:31.292193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.846 [2024-04-26 15:03:31.292199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.846 [2024-04-26 15:03:31.292212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.846 qpair failed and we were unable to recover it. 
00:26:48.846 [2024-04-26 15:03:31.302132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.846 [2024-04-26 15:03:31.302174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.846 [2024-04-26 15:03:31.302192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.846 [2024-04-26 15:03:31.302198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.846 [2024-04-26 15:03:31.302204] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.846 [2024-04-26 15:03:31.302218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.846 qpair failed and we were unable to recover it. 00:26:48.846 [2024-04-26 15:03:31.312162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.846 [2024-04-26 15:03:31.312212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.846 [2024-04-26 15:03:31.312225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.846 [2024-04-26 15:03:31.312232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.846 [2024-04-26 15:03:31.312238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.846 [2024-04-26 15:03:31.312252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.846 qpair failed and we were unable to recover it. 00:26:48.846 [2024-04-26 15:03:31.322202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.846 [2024-04-26 15:03:31.322255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.846 [2024-04-26 15:03:31.322269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.846 [2024-04-26 15:03:31.322276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.846 [2024-04-26 15:03:31.322282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.322295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 
00:26:48.847 [2024-04-26 15:03:31.332208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.332252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.332265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.332272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.332278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.332291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 00:26:48.847 [2024-04-26 15:03:31.342245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.342295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.342309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.342315] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.342321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.342338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 00:26:48.847 [2024-04-26 15:03:31.352244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.352300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.352314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.352321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.352327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.352340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 
00:26:48.847 [2024-04-26 15:03:31.362311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.362410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.362424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.362431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.362437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.362450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 00:26:48.847 [2024-04-26 15:03:31.372330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.372414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.372428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.372435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.372441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.372455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 00:26:48.847 [2024-04-26 15:03:31.382364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.382413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.382427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.382434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.382440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.382453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 
00:26:48.847 [2024-04-26 15:03:31.392382] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.392428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.392444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.392451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.392457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.392471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 00:26:48.847 [2024-04-26 15:03:31.402283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.402338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.402352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.402358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.402364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.402378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 00:26:48.847 [2024-04-26 15:03:31.412309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.412381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.412394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.412401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.412407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.412421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 
00:26:48.847 [2024-04-26 15:03:31.422378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.422438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.422452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.422459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.422465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.422478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 00:26:48.847 [2024-04-26 15:03:31.432542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.432613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.432626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.432633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.432639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.432657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 00:26:48.847 [2024-04-26 15:03:31.442525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.442600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.442613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.442620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.442626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.442639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 
00:26:48.847 [2024-04-26 15:03:31.452524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.847 [2024-04-26 15:03:31.452580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.847 [2024-04-26 15:03:31.452594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.847 [2024-04-26 15:03:31.452600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.847 [2024-04-26 15:03:31.452606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.847 [2024-04-26 15:03:31.452620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.847 qpair failed and we were unable to recover it. 00:26:48.848 [2024-04-26 15:03:31.462440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.848 [2024-04-26 15:03:31.462492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.848 [2024-04-26 15:03:31.462506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.848 [2024-04-26 15:03:31.462512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.848 [2024-04-26 15:03:31.462518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.848 [2024-04-26 15:03:31.462532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.848 qpair failed and we were unable to recover it. 00:26:48.848 [2024-04-26 15:03:31.472597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.848 [2024-04-26 15:03:31.472646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.848 [2024-04-26 15:03:31.472659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.848 [2024-04-26 15:03:31.472666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.848 [2024-04-26 15:03:31.472672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.848 [2024-04-26 15:03:31.472685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.848 qpair failed and we were unable to recover it. 
00:26:48.848 [2024-04-26 15:03:31.482514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.848 [2024-04-26 15:03:31.482578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.848 [2024-04-26 15:03:31.482602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.848 [2024-04-26 15:03:31.482610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.848 [2024-04-26 15:03:31.482617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.848 [2024-04-26 15:03:31.482635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.848 qpair failed and we were unable to recover it. 00:26:48.848 [2024-04-26 15:03:31.492644] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.848 [2024-04-26 15:03:31.492694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.848 [2024-04-26 15:03:31.492709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.848 [2024-04-26 15:03:31.492716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.848 [2024-04-26 15:03:31.492722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.848 [2024-04-26 15:03:31.492736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.848 qpair failed and we were unable to recover it. 00:26:48.848 [2024-04-26 15:03:31.502684] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.848 [2024-04-26 15:03:31.502735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.848 [2024-04-26 15:03:31.502749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.848 [2024-04-26 15:03:31.502756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.848 [2024-04-26 15:03:31.502762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:48.848 [2024-04-26 15:03:31.502776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.848 qpair failed and we were unable to recover it. 
00:26:49.111 [2024-04-26 15:03:31.512701] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.512754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.512769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.512775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.512782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.512795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 00:26:49.111 [2024-04-26 15:03:31.522737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.522790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.522804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.522811] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.522822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.522835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 00:26:49.111 [2024-04-26 15:03:31.532767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.532816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.532830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.532841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.532848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.532862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 
00:26:49.111 [2024-04-26 15:03:31.542775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.542825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.542844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.542851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.542857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.542872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 00:26:49.111 [2024-04-26 15:03:31.552860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.552924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.552938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.552945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.552952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.552966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 00:26:49.111 [2024-04-26 15:03:31.562720] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.562773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.562788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.562794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.562801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.562820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 
00:26:49.111 [2024-04-26 15:03:31.572874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.572922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.572936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.572943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.572949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.572963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 00:26:49.111 [2024-04-26 15:03:31.582903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.582953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.582967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.582974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.582980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.582994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 00:26:49.111 [2024-04-26 15:03:31.592927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.593015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.593029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.593035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.593041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.593055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 
00:26:49.111 [2024-04-26 15:03:31.602868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.602924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.111 [2024-04-26 15:03:31.602938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.111 [2024-04-26 15:03:31.602944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.111 [2024-04-26 15:03:31.602950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.111 [2024-04-26 15:03:31.602964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.111 qpair failed and we were unable to recover it. 00:26:49.111 [2024-04-26 15:03:31.612982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.111 [2024-04-26 15:03:31.613058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.613072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.613083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.613089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.613103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 00:26:49.112 [2024-04-26 15:03:31.623001] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.623049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.623063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.623070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.623076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.623090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 
00:26:49.112 [2024-04-26 15:03:31.633072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.633122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.633136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.633143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.633148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.633162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 00:26:49.112 [2024-04-26 15:03:31.643026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.643083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.643097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.643103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.643110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.643123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 00:26:49.112 [2024-04-26 15:03:31.652949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.652995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.653009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.653016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.653022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.653036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 
00:26:49.112 [2024-04-26 15:03:31.663067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.663113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.663127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.663133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.663140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.663153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 00:26:49.112 [2024-04-26 15:03:31.673131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.673181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.673194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.673201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.673207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.673220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 00:26:49.112 [2024-04-26 15:03:31.683030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.683084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.683098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.683104] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.683110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.683123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 
00:26:49.112 [2024-04-26 15:03:31.693161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.693205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.693219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.693225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.693231] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.693245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 00:26:49.112 [2024-04-26 15:03:31.703279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.703338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.703355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.703361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.703367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.703381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 00:26:49.112 [2024-04-26 15:03:31.713120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.713170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.713184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.713191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.713197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.713211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 
00:26:49.112 [2024-04-26 15:03:31.723290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.723347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.723361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.723368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.723374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.723387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 00:26:49.112 [2024-04-26 15:03:31.733253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.733298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.112 [2024-04-26 15:03:31.733312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.112 [2024-04-26 15:03:31.733319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.112 [2024-04-26 15:03:31.733324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.112 [2024-04-26 15:03:31.733338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.112 qpair failed and we were unable to recover it. 00:26:49.112 [2024-04-26 15:03:31.743314] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.112 [2024-04-26 15:03:31.743365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.113 [2024-04-26 15:03:31.743379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.113 [2024-04-26 15:03:31.743385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.113 [2024-04-26 15:03:31.743391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.113 [2024-04-26 15:03:31.743408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.113 qpair failed and we were unable to recover it. 
00:26:49.113 [2024-04-26 15:03:31.753298] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.113 [2024-04-26 15:03:31.753383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.113 [2024-04-26 15:03:31.753397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.113 [2024-04-26 15:03:31.753403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.113 [2024-04-26 15:03:31.753409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.113 [2024-04-26 15:03:31.753423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.113 qpair failed and we were unable to recover it. 00:26:49.113 [2024-04-26 15:03:31.763381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.113 [2024-04-26 15:03:31.763430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.113 [2024-04-26 15:03:31.763443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.113 [2024-04-26 15:03:31.763450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.113 [2024-04-26 15:03:31.763456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.113 [2024-04-26 15:03:31.763470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.113 qpair failed and we were unable to recover it. 00:26:49.113 [2024-04-26 15:03:31.773391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.113 [2024-04-26 15:03:31.773439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.113 [2024-04-26 15:03:31.773453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.113 [2024-04-26 15:03:31.773459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.113 [2024-04-26 15:03:31.773465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.113 [2024-04-26 15:03:31.773480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.113 qpair failed and we were unable to recover it. 
00:26:49.375 [2024-04-26 15:03:31.783461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.375 [2024-04-26 15:03:31.783555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.375 [2024-04-26 15:03:31.783569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.375 [2024-04-26 15:03:31.783575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.375 [2024-04-26 15:03:31.783582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.375 [2024-04-26 15:03:31.783595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.375 qpair failed and we were unable to recover it. 00:26:49.375 [2024-04-26 15:03:31.793439] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.375 [2024-04-26 15:03:31.793491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.375 [2024-04-26 15:03:31.793508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.375 [2024-04-26 15:03:31.793515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.793521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.793535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 00:26:49.376 [2024-04-26 15:03:31.803469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.803522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.803535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.803542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.803548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.803562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 
00:26:49.376 [2024-04-26 15:03:31.813480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.813531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.813555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.813563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.813570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.813589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 00:26:49.376 [2024-04-26 15:03:31.823536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.823584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.823607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.823616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.823623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.823640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 00:26:49.376 [2024-04-26 15:03:31.833564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.833622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.833646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.833655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.833661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.833683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 
00:26:49.376 [2024-04-26 15:03:31.843590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.843647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.843670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.843678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.843685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.843703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 00:26:49.376 [2024-04-26 15:03:31.853588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.853659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.853675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.853683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.853689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.853704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 00:26:49.376 [2024-04-26 15:03:31.863660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.863707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.863721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.863728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.863734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.863748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 
00:26:49.376 [2024-04-26 15:03:31.873629] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.873677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.873691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.873697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.873704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.873717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 00:26:49.376 [2024-04-26 15:03:31.883698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.883749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.883768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.883774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.883780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.883794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 00:26:49.376 [2024-04-26 15:03:31.893726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.893778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.893792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.893799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.893805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.893818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 
00:26:49.376 [2024-04-26 15:03:31.903747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.903797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.903811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.903818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.903824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.903843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 00:26:49.376 [2024-04-26 15:03:31.913766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.913813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.913827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.913834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.913846] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.913861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.376 qpair failed and we were unable to recover it. 00:26:49.376 [2024-04-26 15:03:31.923774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.376 [2024-04-26 15:03:31.923830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.376 [2024-04-26 15:03:31.923850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.376 [2024-04-26 15:03:31.923857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.376 [2024-04-26 15:03:31.923866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.376 [2024-04-26 15:03:31.923880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 
00:26:49.377 [2024-04-26 15:03:31.933832] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:31.933884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:31.933898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:31.933905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:31.933911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:31.933925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 00:26:49.377 [2024-04-26 15:03:31.943861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:31.943913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:31.943926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:31.943933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:31.943939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:31.943953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 00:26:49.377 [2024-04-26 15:03:31.953880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:31.953931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:31.953947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:31.953954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:31.953961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:31.953978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 
00:26:49.377 [2024-04-26 15:03:31.963793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:31.963851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:31.963866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:31.963873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:31.963879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:31.963893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 00:26:49.377 [2024-04-26 15:03:31.973908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:31.973972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:31.973987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:31.973994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:31.974000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:31.974014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 00:26:49.377 [2024-04-26 15:03:31.983937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:31.983993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:31.984006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:31.984013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:31.984019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:31.984033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 
00:26:49.377 [2024-04-26 15:03:31.993987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:31.994038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:31.994053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:31.994059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:31.994067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:31.994080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 00:26:49.377 [2024-04-26 15:03:32.004031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:32.004082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:32.004096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:32.004103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:32.004109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:32.004123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 00:26:49.377 [2024-04-26 15:03:32.013912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:32.013964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:32.013979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:32.013989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:32.013995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:32.014009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 
00:26:49.377 [2024-04-26 15:03:32.024088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:32.024137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:32.024151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:32.024157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:32.024163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:32.024177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 00:26:49.377 [2024-04-26 15:03:32.034116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.377 [2024-04-26 15:03:32.034164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.377 [2024-04-26 15:03:32.034178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.377 [2024-04-26 15:03:32.034185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.377 [2024-04-26 15:03:32.034191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.377 [2024-04-26 15:03:32.034204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.377 qpair failed and we were unable to recover it. 00:26:49.640 [2024-04-26 15:03:32.044191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.640 [2024-04-26 15:03:32.044255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.640 [2024-04-26 15:03:32.044269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.640 [2024-04-26 15:03:32.044276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.640 [2024-04-26 15:03:32.044282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.640 [2024-04-26 15:03:32.044296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.640 qpair failed and we were unable to recover it. 
00:26:49.640 [2024-04-26 15:03:32.054153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.640 [2024-04-26 15:03:32.054202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.640 [2024-04-26 15:03:32.054216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.640 [2024-04-26 15:03:32.054223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.640 [2024-04-26 15:03:32.054229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.640 [2024-04-26 15:03:32.054243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.640 qpair failed and we were unable to recover it. 00:26:49.640 [2024-04-26 15:03:32.064170] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.064220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.064234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.064240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.064246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.064260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 00:26:49.641 [2024-04-26 15:03:32.074206] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.074254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.074268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.074274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.074281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.074294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 
00:26:49.641 [2024-04-26 15:03:32.084103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.084155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.084169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.084176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.084182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.084196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 00:26:49.641 [2024-04-26 15:03:32.094242] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.094291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.094305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.094312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.094318] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.094331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 00:26:49.641 [2024-04-26 15:03:32.104284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.104383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.104397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.104408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.104414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.104428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 
00:26:49.641 [2024-04-26 15:03:32.114188] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.114239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.114253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.114260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.114266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.114279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 00:26:49.641 [2024-04-26 15:03:32.124341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.124397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.124411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.124418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.124424] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.124439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 00:26:49.641 [2024-04-26 15:03:32.134336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.134383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.134397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.134404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.134410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.134424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 
00:26:49.641 [2024-04-26 15:03:32.144442] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.144489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.144502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.144509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.144515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.144529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 00:26:49.641 [2024-04-26 15:03:32.154423] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.154471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.154486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.641 [2024-04-26 15:03:32.154492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.641 [2024-04-26 15:03:32.154498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.641 [2024-04-26 15:03:32.154512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.641 qpair failed and we were unable to recover it. 00:26:49.641 [2024-04-26 15:03:32.164447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.641 [2024-04-26 15:03:32.164553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.641 [2024-04-26 15:03:32.164567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.164573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.164579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.164593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 
00:26:49.642 [2024-04-26 15:03:32.174472] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.174530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.174544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.174551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.174557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.174571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 00:26:49.642 [2024-04-26 15:03:32.184445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.184509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.184523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.184529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.184535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.184549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 00:26:49.642 [2024-04-26 15:03:32.194533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.194612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.194629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.194635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.194642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.194655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 
00:26:49.642 [2024-04-26 15:03:32.204443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.204491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.204504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.204511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.204517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.204536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 00:26:49.642 [2024-04-26 15:03:32.214603] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.214656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.214670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.214677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.214683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.214697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 00:26:49.642 [2024-04-26 15:03:32.224607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.224656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.224671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.224677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.224683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.224697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 
00:26:49.642 [2024-04-26 15:03:32.234665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.234713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.234727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.234734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.234740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.234757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 00:26:49.642 [2024-04-26 15:03:32.244662] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.244715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.244729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.244736] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.244742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.244756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 00:26:49.642 [2024-04-26 15:03:32.254686] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.254734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.254748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.254755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.254761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.642 [2024-04-26 15:03:32.254775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.642 qpair failed and we were unable to recover it. 
00:26:49.642 [2024-04-26 15:03:32.264721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.642 [2024-04-26 15:03:32.264770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.642 [2024-04-26 15:03:32.264784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.642 [2024-04-26 15:03:32.264791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.642 [2024-04-26 15:03:32.264797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.643 [2024-04-26 15:03:32.264811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.643 qpair failed and we were unable to recover it. 00:26:49.643 [2024-04-26 15:03:32.274731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.643 [2024-04-26 15:03:32.274822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.643 [2024-04-26 15:03:32.274840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.643 [2024-04-26 15:03:32.274848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.643 [2024-04-26 15:03:32.274854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.643 [2024-04-26 15:03:32.274868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.643 qpair failed and we were unable to recover it. 00:26:49.643 [2024-04-26 15:03:32.284781] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.643 [2024-04-26 15:03:32.284835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.643 [2024-04-26 15:03:32.284859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.643 [2024-04-26 15:03:32.284866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.643 [2024-04-26 15:03:32.284872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.643 [2024-04-26 15:03:32.284886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.643 qpair failed and we were unable to recover it. 
00:26:49.643 [2024-04-26 15:03:32.294799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.643 [2024-04-26 15:03:32.294850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.643 [2024-04-26 15:03:32.294864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.643 [2024-04-26 15:03:32.294871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.643 [2024-04-26 15:03:32.294877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.643 [2024-04-26 15:03:32.294891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.643 qpair failed and we were unable to recover it. 00:26:49.905 [2024-04-26 15:03:32.304822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.905 [2024-04-26 15:03:32.304871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.905 [2024-04-26 15:03:32.304885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.905 [2024-04-26 15:03:32.304892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.905 [2024-04-26 15:03:32.304898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.905 [2024-04-26 15:03:32.304912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.905 qpair failed and we were unable to recover it. 00:26:49.905 [2024-04-26 15:03:32.314846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.905 [2024-04-26 15:03:32.314897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.905 [2024-04-26 15:03:32.314911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.905 [2024-04-26 15:03:32.314917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.905 [2024-04-26 15:03:32.314923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.905 [2024-04-26 15:03:32.314937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.905 qpair failed and we were unable to recover it. 
00:26:49.905 [2024-04-26 15:03:32.324790] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.905 [2024-04-26 15:03:32.324883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.905 [2024-04-26 15:03:32.324897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.905 [2024-04-26 15:03:32.324904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.905 [2024-04-26 15:03:32.324914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.905 [2024-04-26 15:03:32.324928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.905 qpair failed and we were unable to recover it. 00:26:49.905 [2024-04-26 15:03:32.334777] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.905 [2024-04-26 15:03:32.334824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.334843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.334850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.334856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.334870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 00:26:49.906 [2024-04-26 15:03:32.344950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.345044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.345058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.345064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.345071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.345085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 
00:26:49.906 [2024-04-26 15:03:32.354948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.355000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.355015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.355021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.355027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.355041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 00:26:49.906 [2024-04-26 15:03:32.364960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.365055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.365069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.365076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.365082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.365096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 00:26:49.906 [2024-04-26 15:03:32.375006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.375059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.375073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.375079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.375085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.375099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 
00:26:49.906 [2024-04-26 15:03:32.385030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.385080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.385094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.385101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.385107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.385121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 00:26:49.906 [2024-04-26 15:03:32.395080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.395181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.395195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.395201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.395207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.395221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 00:26:49.906 [2024-04-26 15:03:32.405114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.405170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.405184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.405191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.405197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.405210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 
00:26:49.906 [2024-04-26 15:03:32.415101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.415163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.415176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.415186] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.415193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.415206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 00:26:49.906 [2024-04-26 15:03:32.425135] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.425184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.425197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.425204] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.425210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.425224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 00:26:49.906 [2024-04-26 15:03:32.435076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.435127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.435141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.435148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.435154] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.435167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 
00:26:49.906 [2024-04-26 15:03:32.445195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.445247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.445261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.445267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.445273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.445292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 00:26:49.906 [2024-04-26 15:03:32.455284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.455354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.455368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.455374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.906 [2024-04-26 15:03:32.455381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.906 [2024-04-26 15:03:32.455394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.906 qpair failed and we were unable to recover it. 00:26:49.906 [2024-04-26 15:03:32.465324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.906 [2024-04-26 15:03:32.465370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.906 [2024-04-26 15:03:32.465383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.906 [2024-04-26 15:03:32.465390] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.465396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.465410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 
00:26:49.907 [2024-04-26 15:03:32.475157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.475206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.475219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.475226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.475232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.475246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 00:26:49.907 [2024-04-26 15:03:32.485318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.485374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.485387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.485394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.485400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.485414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 00:26:49.907 [2024-04-26 15:03:32.495346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.495396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.495410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.495416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.495422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.495436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 
00:26:49.907 [2024-04-26 15:03:32.505418] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.505478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.505491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.505501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.505507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.505520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 00:26:49.907 [2024-04-26 15:03:32.515374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.515425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.515439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.515446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.515452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.515466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 00:26:49.907 [2024-04-26 15:03:32.525437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.525485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.525498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.525505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.525511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.525524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 
00:26:49.907 [2024-04-26 15:03:32.535462] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.535510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.535523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.535530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.535536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.535549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 00:26:49.907 [2024-04-26 15:03:32.545483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.545531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.545545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.545552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.545558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.545572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 00:26:49.907 [2024-04-26 15:03:32.555515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.555559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.555572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.555580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.555586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.555599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 
00:26:49.907 [2024-04-26 15:03:32.565540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.907 [2024-04-26 15:03:32.565641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.907 [2024-04-26 15:03:32.565654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.907 [2024-04-26 15:03:32.565661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.907 [2024-04-26 15:03:32.565667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:49.907 [2024-04-26 15:03:32.565680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.907 qpair failed and we were unable to recover it. 00:26:50.169 [2024-04-26 15:03:32.575567] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.169 [2024-04-26 15:03:32.575620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.169 [2024-04-26 15:03:32.575634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.169 [2024-04-26 15:03:32.575641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.169 [2024-04-26 15:03:32.575647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.169 [2024-04-26 15:03:32.575660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.169 qpair failed and we were unable to recover it. 00:26:50.169 [2024-04-26 15:03:32.585620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.169 [2024-04-26 15:03:32.585672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.169 [2024-04-26 15:03:32.585686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.169 [2024-04-26 15:03:32.585693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.169 [2024-04-26 15:03:32.585699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.169 [2024-04-26 15:03:32.585712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.169 qpair failed and we were unable to recover it. 
00:26:50.169 [2024-04-26 15:03:32.595522] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.169 [2024-04-26 15:03:32.595572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.169 [2024-04-26 15:03:32.595590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.169 [2024-04-26 15:03:32.595597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.169 [2024-04-26 15:03:32.595603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.169 [2024-04-26 15:03:32.595622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.169 qpair failed and we were unable to recover it. 00:26:50.169 [2024-04-26 15:03:32.605673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.169 [2024-04-26 15:03:32.605764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.169 [2024-04-26 15:03:32.605778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.169 [2024-04-26 15:03:32.605784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.169 [2024-04-26 15:03:32.605791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.169 [2024-04-26 15:03:32.605804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.169 qpair failed and we were unable to recover it. 00:26:50.169 [2024-04-26 15:03:32.615676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.169 [2024-04-26 15:03:32.615723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.169 [2024-04-26 15:03:32.615738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.169 [2024-04-26 15:03:32.615746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.169 [2024-04-26 15:03:32.615752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.169 [2024-04-26 15:03:32.615766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.169 qpair failed and we were unable to recover it. 
00:26:50.169 [2024-04-26 15:03:32.625768] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.169 [2024-04-26 15:03:32.625814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.169 [2024-04-26 15:03:32.625828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.169 [2024-04-26 15:03:32.625835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.169 [2024-04-26 15:03:32.625848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.169 [2024-04-26 15:03:32.625862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.169 qpair failed and we were unable to recover it. 00:26:50.169 [2024-04-26 15:03:32.635758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.169 [2024-04-26 15:03:32.635845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.169 [2024-04-26 15:03:32.635859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.169 [2024-04-26 15:03:32.635866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.169 [2024-04-26 15:03:32.635872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.169 [2024-04-26 15:03:32.635889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.169 qpair failed and we were unable to recover it. 00:26:50.170 [2024-04-26 15:03:32.645779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.645829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.645847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.645855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.645861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.645875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 
00:26:50.170 [2024-04-26 15:03:32.655666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.655717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.655731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.655738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.655744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.655758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 00:26:50.170 [2024-04-26 15:03:32.665913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.665991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.666005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.666011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.666017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.666031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 00:26:50.170 [2024-04-26 15:03:32.675908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.675959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.675973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.675980] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.675986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.675999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 
00:26:50.170 [2024-04-26 15:03:32.685944] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.686004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.686020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.686027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.686033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.686047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 00:26:50.170 [2024-04-26 15:03:32.695876] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.695927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.695941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.695948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.695954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.695968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 00:26:50.170 [2024-04-26 15:03:32.705941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.705994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.706007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.706014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.706020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.706034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 
00:26:50.170 [2024-04-26 15:03:32.715980] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.716027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.716041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.716047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.716053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.716067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 00:26:50.170 [2024-04-26 15:03:32.725993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.726046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.726060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.726067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.726076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.726090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 00:26:50.170 [2024-04-26 15:03:32.735909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.735960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.735975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.735982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.735991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.736005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 
00:26:50.170 [2024-04-26 15:03:32.746057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.746109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.746123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.746130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.746136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.746150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 00:26:50.170 [2024-04-26 15:03:32.756061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.756109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.756122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.756129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.756136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.756149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 00:26:50.170 [2024-04-26 15:03:32.766124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.766177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.170 [2024-04-26 15:03:32.766191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.170 [2024-04-26 15:03:32.766197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.170 [2024-04-26 15:03:32.766203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.170 [2024-04-26 15:03:32.766217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.170 qpair failed and we were unable to recover it. 
00:26:50.170 [2024-04-26 15:03:32.776022] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.170 [2024-04-26 15:03:32.776070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.171 [2024-04-26 15:03:32.776084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.171 [2024-04-26 15:03:32.776090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.171 [2024-04-26 15:03:32.776096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.171 [2024-04-26 15:03:32.776110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.171 qpair failed and we were unable to recover it. 00:26:50.171 [2024-04-26 15:03:32.786156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.171 [2024-04-26 15:03:32.786202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.171 [2024-04-26 15:03:32.786216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.171 [2024-04-26 15:03:32.786222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.171 [2024-04-26 15:03:32.786228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.171 [2024-04-26 15:03:32.786242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.171 qpair failed and we were unable to recover it. 00:26:50.171 [2024-04-26 15:03:32.796188] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.171 [2024-04-26 15:03:32.796239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.171 [2024-04-26 15:03:32.796252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.171 [2024-04-26 15:03:32.796259] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.171 [2024-04-26 15:03:32.796265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.171 [2024-04-26 15:03:32.796279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.171 qpair failed and we were unable to recover it. 
00:26:50.171 [2024-04-26 15:03:32.806212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.171 [2024-04-26 15:03:32.806263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.171 [2024-04-26 15:03:32.806276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.171 [2024-04-26 15:03:32.806283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.171 [2024-04-26 15:03:32.806289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.171 [2024-04-26 15:03:32.806302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.171 qpair failed and we were unable to recover it. 00:26:50.171 [2024-04-26 15:03:32.816237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.171 [2024-04-26 15:03:32.816283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.171 [2024-04-26 15:03:32.816297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.171 [2024-04-26 15:03:32.816304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.171 [2024-04-26 15:03:32.816313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.171 [2024-04-26 15:03:32.816327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.171 qpair failed and we were unable to recover it. 00:26:50.171 [2024-04-26 15:03:32.826255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.171 [2024-04-26 15:03:32.826302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.171 [2024-04-26 15:03:32.826316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.171 [2024-04-26 15:03:32.826323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.171 [2024-04-26 15:03:32.826329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.171 [2024-04-26 15:03:32.826344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.171 qpair failed and we were unable to recover it. 
00:26:50.434 [2024-04-26 15:03:32.836235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.434 [2024-04-26 15:03:32.836288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.434 [2024-04-26 15:03:32.836301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.434 [2024-04-26 15:03:32.836308] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.434 [2024-04-26 15:03:32.836315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.434 [2024-04-26 15:03:32.836328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.434 qpair failed and we were unable to recover it. 00:26:50.434 [2024-04-26 15:03:32.846281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.434 [2024-04-26 15:03:32.846368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.434 [2024-04-26 15:03:32.846382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.434 [2024-04-26 15:03:32.846389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.434 [2024-04-26 15:03:32.846395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.434 [2024-04-26 15:03:32.846409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.434 qpair failed and we were unable to recover it. 00:26:50.434 [2024-04-26 15:03:32.856391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.434 [2024-04-26 15:03:32.856461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.434 [2024-04-26 15:03:32.856474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.434 [2024-04-26 15:03:32.856481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.434 [2024-04-26 15:03:32.856487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.434 [2024-04-26 15:03:32.856501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.434 qpair failed and we were unable to recover it. 
00:26:50.434 [2024-04-26 15:03:32.866361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.434 [2024-04-26 15:03:32.866411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.434 [2024-04-26 15:03:32.866426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.434 [2024-04-26 15:03:32.866433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.434 [2024-04-26 15:03:32.866440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.434 [2024-04-26 15:03:32.866453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.434 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:32.876388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.876493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.876507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.876514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.876520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.876533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:32.886297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.886346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.886360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.886367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.886374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.886388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 
00:26:50.435 [2024-04-26 15:03:32.896410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.896456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.896470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.896477] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.896483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.896498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:32.906461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.906506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.906520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.906530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.906536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.906549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:32.916493] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.916542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.916556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.916563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.916569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.916583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 
00:26:50.435 [2024-04-26 15:03:32.926545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.926592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.926605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.926612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.926618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.926631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:32.936535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.936586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.936600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.936606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.936612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.936626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:32.946576] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.946636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.946660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.946669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.946675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.946693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 
00:26:50.435 [2024-04-26 15:03:32.956564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.956615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.956632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.956639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.956645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.956661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:32.966634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.966689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.966704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.966710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.966717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.966731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:32.976590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.976638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.976651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.976658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.976664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.976678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 
00:26:50.435 [2024-04-26 15:03:32.986702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.986752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.986766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.986772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.986779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.986792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:32.996585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.435 [2024-04-26 15:03:32.996634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.435 [2024-04-26 15:03:32.996651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.435 [2024-04-26 15:03:32.996658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.435 [2024-04-26 15:03:32.996665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.435 [2024-04-26 15:03:32.996678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.435 qpair failed and we were unable to recover it. 00:26:50.435 [2024-04-26 15:03:33.006749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.006849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.006864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.006871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.006877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.006891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 
00:26:50.436 [2024-04-26 15:03:33.016771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.016823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.016843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.016850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.016857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.016871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 00:26:50.436 [2024-04-26 15:03:33.026846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.026924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.026937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.026944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.026950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.026964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 00:26:50.436 [2024-04-26 15:03:33.036817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.036870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.036884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.036891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.036897] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.036917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 
00:26:50.436 [2024-04-26 15:03:33.046845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.046902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.046916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.046922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.046928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.046942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 00:26:50.436 [2024-04-26 15:03:33.056984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.057031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.057045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.057052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.057058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.057071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 00:26:50.436 [2024-04-26 15:03:33.066889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.066937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.066951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.066957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.066964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.066977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 
00:26:50.436 [2024-04-26 15:03:33.076935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.076982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.076996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.077003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.077009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.077023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 00:26:50.436 [2024-04-26 15:03:33.087011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.087063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.087080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.087087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.087093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.087106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 00:26:50.436 [2024-04-26 15:03:33.096993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.436 [2024-04-26 15:03:33.097042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.436 [2024-04-26 15:03:33.097055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.436 [2024-04-26 15:03:33.097062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.436 [2024-04-26 15:03:33.097068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.436 [2024-04-26 15:03:33.097082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.436 qpair failed and we were unable to recover it. 
00:26:50.699 [2024-04-26 15:03:33.106991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.699 [2024-04-26 15:03:33.107040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.699 [2024-04-26 15:03:33.107053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.699 [2024-04-26 15:03:33.107060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.699 [2024-04-26 15:03:33.107066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.699 [2024-04-26 15:03:33.107080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.699 qpair failed and we were unable to recover it. 00:26:50.699 [2024-04-26 15:03:33.117045] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.699 [2024-04-26 15:03:33.117096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.699 [2024-04-26 15:03:33.117111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.699 [2024-04-26 15:03:33.117117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.699 [2024-04-26 15:03:33.117123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.699 [2024-04-26 15:03:33.117137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.699 qpair failed and we were unable to recover it. 00:26:50.699 [2024-04-26 15:03:33.127052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.699 [2024-04-26 15:03:33.127109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.699 [2024-04-26 15:03:33.127122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.699 [2024-04-26 15:03:33.127129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.699 [2024-04-26 15:03:33.127135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0670000b90 00:26:50.699 [2024-04-26 15:03:33.127152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.699 qpair failed and we were unable to recover it. 
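The block above repeats the same failure for every I/O qpair the initiator tries to bring up against 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1: the target's ctrlr.c rejects each Fabrics CONNECT because controller ID 0x1 no longer exists on its side, the host sees the CONNECT complete with sct 1, sc 130 (0x82, which corresponds to the Fabrics connect-invalid-parameters status), and the qpair is torn down with transport error -6. For a manual check against the same listener outside of this test, a kernel-initiator connect along these lines could be used; the nvme-cli flags and module name are standard, but the commands are illustrative only and are not part of the test scripts:

    sudo modprobe nvme-tcp
    sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
    sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1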
00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Write completed with error (sct=0, sc=8) 00:26:50.699 starting I/O failed 00:26:50.699 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 [2024-04-26 15:03:33.128014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read 
completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 [2024-04-26 15:03:33.128197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.700 [2024-04-26 15:03:33.137076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.700 [2024-04-26 15:03:33.137122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.700 [2024-04-26 15:03:33.137135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.700 [2024-04-26 15:03:33.137141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.700 [2024-04-26 15:03:33.137146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0678000b90 00:26:50.700 [2024-04-26 15:03:33.137158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.700 qpair failed and we were unable to recover it. 
00:26:50.700 [2024-04-26 15:03:33.147113] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.700 [2024-04-26 15:03:33.147155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.700 [2024-04-26 15:03:33.147167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.700 [2024-04-26 15:03:33.147172] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.700 [2024-04-26 15:03:33.147176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0678000b90 00:26:50.700 [2024-04-26 15:03:33.147187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.700 qpair failed and we were unable to recover it. 00:26:50.700 [2024-04-26 15:03:33.157162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.700 [2024-04-26 15:03:33.157274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.700 [2024-04-26 15:03:33.157337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.700 [2024-04-26 15:03:33.157361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.700 [2024-04-26 15:03:33.157382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0680000b90 00:26:50.700 [2024-04-26 15:03:33.157433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:50.700 qpair failed and we were unable to recover it. 00:26:50.700 [2024-04-26 15:03:33.167150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.700 [2024-04-26 15:03:33.167255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.700 [2024-04-26 15:03:33.167298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.700 [2024-04-26 15:03:33.167319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.700 [2024-04-26 15:03:33.167338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0680000b90 00:26:50.700 [2024-04-26 15:03:33.167380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:50.700 qpair failed and we were unable to recover it. 
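Each 32-entry run of "Read/Write completed with error (sct=0, sc=8)" is the outstanding I/O of one queue pair being failed back when that qpair is destroyed; under the generic status type, sc=8 maps to a command aborted due to SQ deletion, and the changing tqpair addresses (0x7f0670000b90, 0x7f0678000b90, 0x7f0680000b90) suggest that each reconnect attempt builds a fresh qpair. To summarize how many I/Os and qpairs were affected, the raw console output can be grepped; the log file name below is only a placeholder:

    grep -c 'starting I/O failed' nvmf-tcp-phy-autotest.console.log
    grep -c 'qpair failed and we were unable to recover it' nvmf-tcp-phy-autotest.console.log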
00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Write completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.700 Read completed with error (sct=0, sc=8) 00:26:50.700 starting I/O failed 00:26:50.701 Read completed with error (sct=0, sc=8) 00:26:50.701 starting I/O failed 00:26:50.701 Read completed with error (sct=0, sc=8) 00:26:50.701 starting I/O failed 00:26:50.701 Write completed with error (sct=0, sc=8) 00:26:50.701 starting I/O failed 00:26:50.701 Write completed with error (sct=0, sc=8) 00:26:50.701 starting I/O failed 00:26:50.701 Read completed with error (sct=0, sc=8) 00:26:50.701 starting I/O failed 00:26:50.701 Write completed with error (sct=0, sc=8) 00:26:50.701 starting I/O failed 00:26:50.701 [2024-04-26 15:03:33.167819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.701 [2024-04-26 15:03:33.177220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.701 [2024-04-26 15:03:33.177275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.701 [2024-04-26 15:03:33.177300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.701 [2024-04-26 15:03:33.177308] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:26:50.701 [2024-04-26 15:03:33.177315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x638650 00:26:50.701 [2024-04-26 15:03:33.177332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.701 qpair failed and we were unable to recover it. 00:26:50.701 [2024-04-26 15:03:33.187209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.701 [2024-04-26 15:03:33.187258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.701 [2024-04-26 15:03:33.187275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.701 [2024-04-26 15:03:33.187282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.701 [2024-04-26 15:03:33.187289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x638650 00:26:50.701 [2024-04-26 15:03:33.187303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.701 qpair failed and we were unable to recover it. 00:26:50.701 [2024-04-26 15:03:33.187666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x646160 is same with the state(5) to be set 00:26:50.701 [2024-04-26 15:03:33.187853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x646160 (9): Bad file descriptor 00:26:50.701 Initializing NVMe Controllers 00:26:50.701 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:50.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:50.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:50.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:50.701 Initialization complete. Launching workers. 
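At this point the initiator-side application in the test has re-attached to the target over TCP and started one worker per core (lcores 0-3). The exact binary is not shown in this excerpt; a comparable multi-core fabrics workload could be generated with SPDK's perf example from a build tree, reusing the same trid fields printed in the errors above (queue depth, I/O size and runtime below are arbitrary):

    ./build/examples/perf -q 32 -o 4096 -w randread -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'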
00:26:50.701 Starting thread on core 1 00:26:50.701 Starting thread on core 2 00:26:50.701 Starting thread on core 3 00:26:50.701 Starting thread on core 0 00:26:50.701 15:03:33 -- host/target_disconnect.sh@59 -- # sync 00:26:50.701 00:26:50.701 real 0m11.388s 00:26:50.701 user 0m21.274s 00:26:50.701 sys 0m3.803s 00:26:50.701 15:03:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:50.701 15:03:33 -- common/autotest_common.sh@10 -- # set +x 00:26:50.701 ************************************ 00:26:50.701 END TEST nvmf_target_disconnect_tc2 00:26:50.701 ************************************ 00:26:50.701 15:03:33 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:26:50.701 15:03:33 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:50.701 15:03:33 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:26:50.701 15:03:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:50.701 15:03:33 -- nvmf/common.sh@117 -- # sync 00:26:50.701 15:03:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:50.701 15:03:33 -- nvmf/common.sh@120 -- # set +e 00:26:50.701 15:03:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:50.701 15:03:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:50.701 rmmod nvme_tcp 00:26:50.701 rmmod nvme_fabrics 00:26:50.701 rmmod nvme_keyring 00:26:50.701 15:03:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:50.701 15:03:33 -- nvmf/common.sh@124 -- # set -e 00:26:50.701 15:03:33 -- nvmf/common.sh@125 -- # return 0 00:26:50.701 15:03:33 -- nvmf/common.sh@478 -- # '[' -n 1232014 ']' 00:26:50.701 15:03:33 -- nvmf/common.sh@479 -- # killprocess 1232014 00:26:50.701 15:03:33 -- common/autotest_common.sh@936 -- # '[' -z 1232014 ']' 00:26:50.701 15:03:33 -- common/autotest_common.sh@940 -- # kill -0 1232014 00:26:50.701 15:03:33 -- common/autotest_common.sh@941 -- # uname 00:26:50.701 15:03:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:50.701 15:03:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1232014 00:26:50.961 15:03:33 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:26:50.961 15:03:33 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:26:50.961 15:03:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1232014' 00:26:50.961 killing process with pid 1232014 00:26:50.961 15:03:33 -- common/autotest_common.sh@955 -- # kill 1232014 00:26:50.961 15:03:33 -- common/autotest_common.sh@960 -- # wait 1232014 00:26:50.961 15:03:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:50.961 15:03:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:50.961 15:03:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:50.961 15:03:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:50.961 15:03:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:50.961 15:03:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.961 15:03:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:50.961 15:03:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.506 15:03:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.506 00:26:53.506 real 0m21.382s 00:26:53.506 user 0m49.101s 00:26:53.506 sys 0m9.500s 00:26:53.506 15:03:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:53.506 15:03:35 -- common/autotest_common.sh@10 -- # set +x 00:26:53.506 ************************************ 00:26:53.506 END TEST nvmf_target_disconnect 00:26:53.506 
************************************ 00:26:53.506 15:03:35 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:26:53.506 15:03:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:53.506 15:03:35 -- common/autotest_common.sh@10 -- # set +x 00:26:53.506 15:03:35 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:26:53.506 00:26:53.506 real 19m39.898s 00:26:53.506 user 39m58.978s 00:26:53.506 sys 6m29.486s 00:26:53.506 15:03:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:53.506 15:03:35 -- common/autotest_common.sh@10 -- # set +x 00:26:53.506 ************************************ 00:26:53.506 END TEST nvmf_tcp 00:26:53.506 ************************************ 00:26:53.506 15:03:35 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:26:53.506 15:03:35 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:53.506 15:03:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:53.506 15:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:53.506 15:03:35 -- common/autotest_common.sh@10 -- # set +x 00:26:53.506 ************************************ 00:26:53.506 START TEST spdkcli_nvmf_tcp 00:26:53.506 ************************************ 00:26:53.506 15:03:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:53.506 * Looking for test storage... 00:26:53.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:53.506 15:03:35 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:53.506 15:03:35 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:53.506 15:03:35 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:53.506 15:03:35 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.506 15:03:35 -- nvmf/common.sh@7 -- # uname -s 00:26:53.506 15:03:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.506 15:03:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.506 15:03:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.506 15:03:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.507 15:03:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.507 15:03:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.507 15:03:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.507 15:03:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.507 15:03:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.507 15:03:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.507 15:03:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:53.507 15:03:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:53.507 15:03:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.507 15:03:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.507 15:03:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.507 15:03:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.507 15:03:35 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.507 15:03:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.507 15:03:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.507 15:03:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.507 15:03:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.507 15:03:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.507 15:03:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.507 15:03:35 -- paths/export.sh@5 -- # export PATH 00:26:53.507 15:03:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.507 15:03:35 -- nvmf/common.sh@47 -- # : 0 00:26:53.507 15:03:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.507 15:03:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.507 15:03:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.507 15:03:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.507 15:03:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.507 15:03:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.507 15:03:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.507 15:03:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.507 15:03:35 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:53.507 15:03:35 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:53.507 15:03:35 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:53.507 15:03:35 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:53.507 15:03:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:53.507 15:03:35 -- common/autotest_common.sh@10 -- # set +x 00:26:53.507 15:03:35 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:53.507 15:03:35 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1233907 00:26:53.507 15:03:35 -- spdkcli/common.sh@34 -- # waitforlisten 1233907 00:26:53.507 15:03:35 -- common/autotest_common.sh@817 -- # '[' -z 1233907 ']' 00:26:53.507 15:03:35 -- 
spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:53.507 15:03:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.507 15:03:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:53.507 15:03:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.507 15:03:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:53.507 15:03:35 -- common/autotest_common.sh@10 -- # set +x 00:26:53.507 [2024-04-26 15:03:36.036804] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:53.507 [2024-04-26 15:03:36.036884] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233907 ] 00:26:53.507 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.507 [2024-04-26 15:03:36.100953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:53.507 [2024-04-26 15:03:36.164473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.507 [2024-04-26 15:03:36.164474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.450 15:03:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:54.450 15:03:36 -- common/autotest_common.sh@850 -- # return 0 00:26:54.450 15:03:36 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:54.450 15:03:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:54.450 15:03:36 -- common/autotest_common.sh@10 -- # set +x 00:26:54.450 15:03:36 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:54.450 15:03:36 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:54.450 15:03:36 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:54.450 15:03:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:54.450 15:03:36 -- common/autotest_common.sh@10 -- # set +x 00:26:54.450 15:03:36 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:54.450 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:54.450 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:54.450 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:54.450 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:54.450 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:54.450 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:54.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:54.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:54.450 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:54.450 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:54.450 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:54.450 ' 00:26:54.712 [2024-04-26 15:03:37.160455] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:56.624 [2024-04-26 15:03:39.166638] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.005 [2024-04-26 15:03:40.330492] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:59.917 [2024-04-26 15:03:42.464806] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:01.836 [2024-04-26 15:03:44.298290] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:03.220 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:03.220 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:03.220 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:03.220 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:03.220 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:03.220 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:03.220 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:03.220 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:03.220 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:03.220 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:03.220 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:03.220 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:03.220 15:03:45 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:03.220 15:03:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:03.220 15:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:03.220 15:03:45 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:03.220 15:03:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:03.220 15:03:45 -- common/autotest_common.sh@10 -- # set +x 00:27:03.220 15:03:45 -- spdkcli/nvmf.sh@69 -- # check_match 00:27:03.220 15:03:45 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:03.791 15:03:46 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:03.791 15:03:46 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:03.791 15:03:46 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:03.791 15:03:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:03.791 15:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:03.791 15:03:46 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:03.791 15:03:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:03.791 15:03:46 -- common/autotest_common.sh@10 -- # set +x 00:27:03.791 15:03:46 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:03.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:03.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:03.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:03.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:03.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:03.791 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:03.791 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:03.791 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:03.791 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:03.791 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:03.791 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:03.791 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:03.791 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:03.791 ' 00:27:09.073 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:09.073 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:09.073 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:09.073 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:09.073 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:09.073 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:09.073 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:09.073 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:09.073 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:09.073 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
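The two spdkcli_job.py batches here first build the whole NVMe/TCP configuration (malloc bdevs, transport, three subsystems with namespaces, listeners and allowed hosts) and then tear it back down. As a rough sketch, the same kind of configuration can also be driven with standalone RPC calls instead of spdkcli; the snippet below assumes an nvmf_tgt already listening on the default /var/tmp/spdk.sock and uses scripts/rpc.py from the SPDK tree (the NQN, serial and port mirror the test, but the script itself is only illustrative, not part of the test suite):

# Sketch: minimal NVMe/TCP target config via rpc.py (illustrative only).
RPC=./scripts/rpc.py

# Backing bdev: 32 MiB, 512-byte blocks, like the Malloc bdevs above.
$RPC bdev_malloc_create 32 512 -b Malloc3

# TCP transport; the spdkcli batch additionally sets max_io_qpairs_per_ctrlr=4
# and io_unit_size=8192, which are left at their defaults here.
$RPC nvmf_create_transport -t tcp

# One subsystem with a namespace and a loopback listener on port 4260.
$RPC nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
$RPC nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260

Teardown mirrors the delete batch running here: remove listeners and namespaces, delete the subsystems, then delete the bdevs.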
00:27:09.073 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:09.073 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:09.073 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:09.073 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:09.073 15:03:51 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:09.073 15:03:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:09.073 15:03:51 -- common/autotest_common.sh@10 -- # set +x 00:27:09.073 15:03:51 -- spdkcli/nvmf.sh@90 -- # killprocess 1233907 00:27:09.073 15:03:51 -- common/autotest_common.sh@936 -- # '[' -z 1233907 ']' 00:27:09.073 15:03:51 -- common/autotest_common.sh@940 -- # kill -0 1233907 00:27:09.073 15:03:51 -- common/autotest_common.sh@941 -- # uname 00:27:09.073 15:03:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:09.073 15:03:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1233907 00:27:09.073 15:03:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:09.073 15:03:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:09.073 15:03:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1233907' 00:27:09.073 killing process with pid 1233907 00:27:09.073 15:03:51 -- common/autotest_common.sh@955 -- # kill 1233907 00:27:09.073 [2024-04-26 15:03:51.237055] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:09.073 15:03:51 -- common/autotest_common.sh@960 -- # wait 1233907 00:27:09.073 15:03:51 -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:09.073 15:03:51 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:09.073 15:03:51 -- spdkcli/common.sh@13 -- # '[' -n 1233907 ']' 00:27:09.073 15:03:51 -- spdkcli/common.sh@14 -- # killprocess 1233907 00:27:09.073 15:03:51 -- common/autotest_common.sh@936 -- # '[' -z 1233907 ']' 00:27:09.073 15:03:51 -- common/autotest_common.sh@940 -- # kill -0 1233907 00:27:09.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1233907) - No such process 00:27:09.073 15:03:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1233907 is not found' 00:27:09.073 Process with pid 1233907 is not found 00:27:09.073 15:03:51 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:09.073 15:03:51 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:09.073 15:03:51 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:09.073 00:27:09.073 real 0m15.533s 00:27:09.073 user 0m31.947s 00:27:09.073 sys 0m0.687s 00:27:09.073 15:03:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:09.073 15:03:51 -- common/autotest_common.sh@10 -- # set +x 00:27:09.073 ************************************ 00:27:09.073 END TEST spdkcli_nvmf_tcp 00:27:09.073 ************************************ 00:27:09.073 15:03:51 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:09.073 15:03:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:09.073 15:03:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:09.073 15:03:51 -- 
common/autotest_common.sh@10 -- # set +x 00:27:09.073 ************************************ 00:27:09.073 START TEST nvmf_identify_passthru 00:27:09.073 ************************************ 00:27:09.073 15:03:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:09.073 * Looking for test storage... 00:27:09.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:09.073 15:03:51 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.073 15:03:51 -- nvmf/common.sh@7 -- # uname -s 00:27:09.073 15:03:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.073 15:03:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.073 15:03:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.073 15:03:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.073 15:03:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.073 15:03:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.073 15:03:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.073 15:03:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.073 15:03:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.073 15:03:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.073 15:03:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:09.073 15:03:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:09.073 15:03:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.073 15:03:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.073 15:03:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.073 15:03:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.073 15:03:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.073 15:03:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.073 15:03:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.073 15:03:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.073 15:03:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.073 15:03:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.073 15:03:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.073 15:03:51 -- paths/export.sh@5 -- # export PATH 00:27:09.073 15:03:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.073 15:03:51 -- nvmf/common.sh@47 -- # : 0 00:27:09.073 15:03:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:09.073 15:03:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:09.073 15:03:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.073 15:03:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.073 15:03:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.073 15:03:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:09.073 15:03:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:09.073 15:03:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:09.073 15:03:51 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.073 15:03:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.073 15:03:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.073 15:03:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.073 15:03:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.073 15:03:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.073 15:03:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.073 15:03:51 -- paths/export.sh@5 -- # export PATH 00:27:09.073 15:03:51 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.073 15:03:51 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:09.073 15:03:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:09.073 15:03:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.073 15:03:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:09.073 15:03:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:09.073 15:03:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:09.073 15:03:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.074 15:03:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:09.074 15:03:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.074 15:03:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:09.074 15:03:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:09.074 15:03:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:09.074 15:03:51 -- common/autotest_common.sh@10 -- # set +x 00:27:17.213 15:03:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:17.213 15:03:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.213 15:03:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.213 15:03:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.213 15:03:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.213 15:03:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.213 15:03:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.213 15:03:58 -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.213 15:03:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.213 15:03:58 -- nvmf/common.sh@296 -- # e810=() 00:27:17.213 15:03:58 -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.213 15:03:58 -- nvmf/common.sh@297 -- # x722=() 00:27:17.213 15:03:58 -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.213 15:03:58 -- nvmf/common.sh@298 -- # mlx=() 00:27:17.213 15:03:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.213 15:03:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.213 15:03:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.213 15:03:58 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.213 15:03:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.213 15:03:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.213 15:03:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:17.213 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:17.213 15:03:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.213 15:03:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:17.213 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:17.213 15:03:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.213 15:03:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.213 15:03:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.213 15:03:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:17.213 15:03:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.213 15:03:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:17.213 Found net devices under 0000:31:00.0: cvl_0_0 00:27:17.213 15:03:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.213 15:03:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.213 15:03:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.213 15:03:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:17.213 15:03:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.213 15:03:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:17.213 Found net devices under 0000:31:00.1: cvl_0_1 00:27:17.213 15:03:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.213 15:03:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:17.213 15:03:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:17.213 15:03:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:17.213 15:03:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:17.213 15:03:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.213 15:03:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.213 15:03:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.213 15:03:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.213 15:03:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.213 15:03:58 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.213 15:03:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.213 15:03:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.214 15:03:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.214 15:03:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.214 15:03:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.214 15:03:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.214 15:03:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.214 15:03:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.214 15:03:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.214 15:03:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.214 15:03:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.214 15:03:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.214 15:03:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.214 15:03:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:27:17.214 00:27:17.214 --- 10.0.0.2 ping statistics --- 00:27:17.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.214 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:27:17.214 15:03:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:27:17.214 00:27:17.214 --- 10.0.0.1 ping statistics --- 00:27:17.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.214 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:17.214 15:03:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.214 15:03:58 -- nvmf/common.sh@411 -- # return 0 00:27:17.214 15:03:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:17.214 15:03:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.214 15:03:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:17.214 15:03:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:17.214 15:03:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.214 15:03:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:17.214 15:03:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:17.214 15:03:58 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:17.214 15:03:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:17.214 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:27:17.214 15:03:58 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:17.214 15:03:58 -- common/autotest_common.sh@1510 -- # bdfs=() 00:27:17.214 15:03:58 -- common/autotest_common.sh@1510 -- # local bdfs 00:27:17.214 15:03:58 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:27:17.214 15:03:58 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:27:17.214 15:03:58 -- common/autotest_common.sh@1499 -- # bdfs=() 00:27:17.214 15:03:58 -- common/autotest_common.sh@1499 -- # local bdfs 00:27:17.214 15:03:58 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:27:17.214 15:03:58 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:17.214 15:03:58 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:27:17.214 15:03:58 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:27:17.214 15:03:58 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:27:17.214 15:03:58 -- common/autotest_common.sh@1513 -- # echo 0000:65:00.0 00:27:17.214 15:03:58 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:27:17.214 15:03:58 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:27:17.214 15:03:58 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:27:17.214 15:03:58 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:17.214 15:03:58 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:17.214 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.214 15:03:59 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:27:17.214 15:03:59 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:27:17.214 15:03:59 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:17.214 15:03:59 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:17.214 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.214 15:03:59 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:27:17.214 15:03:59 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:17.214 15:03:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:17.214 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:27:17.474 15:03:59 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:17.474 15:03:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:17.474 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:27:17.474 15:03:59 -- target/identify_passthru.sh@31 -- # nvmfpid=1240730 00:27:17.474 15:03:59 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:17.474 15:03:59 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:17.474 15:03:59 -- target/identify_passthru.sh@35 -- # waitforlisten 1240730 00:27:17.474 15:03:59 -- common/autotest_common.sh@817 -- # '[' -z 1240730 ']' 00:27:17.474 15:03:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.474 15:03:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:17.475 15:03:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.475 15:03:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:17.475 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:27:17.475 [2024-04-26 15:03:59.969026] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
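Before the passthru target is configured, the test reads the serial and model number of the local controller straight over PCIe so it has a reference to compare against later. A condensed sketch of that step, assuming the identify example app was built at build/bin/spdk_nvme_identify and using the BDF that gen_nvme.sh reported above (0000:65:00.0 on this machine):

BDF=0000:65:00.0                        # first NVMe bdf found via gen_nvme.sh | jq
IDENTIFY=./build/bin/spdk_nvme_identify

# Identify over PCIe and keep only the two fields the test compares later.
nvme_serial=$("$IDENTIFY" -r "trtype:PCIe traddr:$BDF" -i 0 | grep 'Serial Number:' | awk '{print $3}')
nvme_model=$("$IDENTIFY" -r "trtype:PCIe traddr:$BDF" -i 0 | grep 'Model Number:' | awk '{print $3}')
echo "PCIe view: serial=$nvme_serial model=$nvme_model"   # S64GNE0R605494 / SAMSUNG in this run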
00:27:17.475 [2024-04-26 15:03:59.969081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.475 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.475 [2024-04-26 15:04:00.040652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.475 [2024-04-26 15:04:00.112182] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.475 [2024-04-26 15:04:00.112230] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.475 [2024-04-26 15:04:00.112238] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.475 [2024-04-26 15:04:00.112245] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.475 [2024-04-26 15:04:00.112251] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.475 [2024-04-26 15:04:00.112398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.475 [2024-04-26 15:04:00.112506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.475 [2024-04-26 15:04:00.112663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.475 [2024-04-26 15:04:00.112664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.416 15:04:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:18.416 15:04:00 -- common/autotest_common.sh@850 -- # return 0 00:27:18.416 15:04:00 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:18.416 15:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.416 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:27:18.416 INFO: Log level set to 20 00:27:18.416 INFO: Requests: 00:27:18.416 { 00:27:18.416 "jsonrpc": "2.0", 00:27:18.416 "method": "nvmf_set_config", 00:27:18.416 "id": 1, 00:27:18.416 "params": { 00:27:18.416 "admin_cmd_passthru": { 00:27:18.416 "identify_ctrlr": true 00:27:18.416 } 00:27:18.416 } 00:27:18.416 } 00:27:18.416 00:27:18.416 INFO: response: 00:27:18.416 { 00:27:18.416 "jsonrpc": "2.0", 00:27:18.416 "id": 1, 00:27:18.416 "result": true 00:27:18.416 } 00:27:18.416 00:27:18.416 15:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.416 15:04:00 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:18.416 15:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.416 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:27:18.416 INFO: Setting log level to 20 00:27:18.416 INFO: Setting log level to 20 00:27:18.416 INFO: Log level set to 20 00:27:18.416 INFO: Log level set to 20 00:27:18.416 INFO: Requests: 00:27:18.416 { 00:27:18.416 "jsonrpc": "2.0", 00:27:18.416 "method": "framework_start_init", 00:27:18.416 "id": 1 00:27:18.416 } 00:27:18.416 00:27:18.416 INFO: Requests: 00:27:18.416 { 00:27:18.416 "jsonrpc": "2.0", 00:27:18.416 "method": "framework_start_init", 00:27:18.416 "id": 1 00:27:18.416 } 00:27:18.416 00:27:18.416 [2024-04-26 15:04:00.831592] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:18.416 INFO: response: 00:27:18.416 { 00:27:18.416 "jsonrpc": "2.0", 00:27:18.416 "id": 1, 00:27:18.416 "result": true 00:27:18.416 } 00:27:18.416 00:27:18.416 INFO: response: 00:27:18.416 { 00:27:18.416 
"jsonrpc": "2.0", 00:27:18.416 "id": 1, 00:27:18.416 "result": true 00:27:18.416 } 00:27:18.416 00:27:18.416 15:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.416 15:04:00 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:18.416 15:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.416 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:27:18.416 INFO: Setting log level to 40 00:27:18.416 INFO: Setting log level to 40 00:27:18.416 INFO: Setting log level to 40 00:27:18.416 [2024-04-26 15:04:00.844869] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.416 15:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.416 15:04:00 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:18.416 15:04:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:18.416 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:27:18.416 15:04:00 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:27:18.416 15:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.416 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:27:18.677 Nvme0n1 00:27:18.677 15:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.677 15:04:01 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:18.677 15:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.677 15:04:01 -- common/autotest_common.sh@10 -- # set +x 00:27:18.677 15:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.677 15:04:01 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:18.677 15:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.677 15:04:01 -- common/autotest_common.sh@10 -- # set +x 00:27:18.677 15:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.677 15:04:01 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.677 15:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.677 15:04:01 -- common/autotest_common.sh@10 -- # set +x 00:27:18.677 [2024-04-26 15:04:01.228153] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.677 15:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.677 15:04:01 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:18.677 15:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.677 15:04:01 -- common/autotest_common.sh@10 -- # set +x 00:27:18.677 [2024-04-26 15:04:01.239967] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:18.677 [ 00:27:18.677 { 00:27:18.677 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:18.677 "subtype": "Discovery", 00:27:18.677 "listen_addresses": [], 00:27:18.677 "allow_any_host": true, 00:27:18.677 "hosts": [] 00:27:18.677 }, 00:27:18.677 { 00:27:18.677 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.677 "subtype": "NVMe", 00:27:18.677 "listen_addresses": [ 00:27:18.677 { 00:27:18.677 "transport": "TCP", 00:27:18.677 "trtype": "TCP", 00:27:18.677 "adrfam": "IPv4", 00:27:18.677 "traddr": "10.0.0.2", 00:27:18.677 "trsvcid": "4420" 00:27:18.677 } 00:27:18.677 ], 
00:27:18.677 "allow_any_host": true, 00:27:18.677 "hosts": [], 00:27:18.677 "serial_number": "SPDK00000000000001", 00:27:18.677 "model_number": "SPDK bdev Controller", 00:27:18.677 "max_namespaces": 1, 00:27:18.677 "min_cntlid": 1, 00:27:18.677 "max_cntlid": 65519, 00:27:18.677 "namespaces": [ 00:27:18.677 { 00:27:18.677 "nsid": 1, 00:27:18.677 "bdev_name": "Nvme0n1", 00:27:18.677 "name": "Nvme0n1", 00:27:18.677 "nguid": "3634473052605494002538450000001F", 00:27:18.677 "uuid": "36344730-5260-5494-0025-38450000001f" 00:27:18.677 } 00:27:18.677 ] 00:27:18.677 } 00:27:18.677 ] 00:27:18.677 15:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.677 15:04:01 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:18.677 15:04:01 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:18.677 15:04:01 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:18.677 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.938 15:04:01 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:27:18.938 15:04:01 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:18.938 15:04:01 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:18.938 15:04:01 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:18.938 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.199 15:04:01 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:27:19.199 15:04:01 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:27:19.199 15:04:01 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:27:19.199 15:04:01 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.199 15:04:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.199 15:04:01 -- common/autotest_common.sh@10 -- # set +x 00:27:19.199 15:04:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.199 15:04:01 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:19.199 15:04:01 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:19.199 15:04:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:19.199 15:04:01 -- nvmf/common.sh@117 -- # sync 00:27:19.199 15:04:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.199 15:04:01 -- nvmf/common.sh@120 -- # set +e 00:27:19.199 15:04:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.199 15:04:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.199 rmmod nvme_tcp 00:27:19.199 rmmod nvme_fabrics 00:27:19.199 rmmod nvme_keyring 00:27:19.199 15:04:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.199 15:04:01 -- nvmf/common.sh@124 -- # set -e 00:27:19.199 15:04:01 -- nvmf/common.sh@125 -- # return 0 00:27:19.199 15:04:01 -- nvmf/common.sh@478 -- # '[' -n 1240730 ']' 00:27:19.199 15:04:01 -- nvmf/common.sh@479 -- # killprocess 1240730 00:27:19.199 15:04:01 -- common/autotest_common.sh@936 -- # '[' -z 1240730 ']' 00:27:19.199 15:04:01 -- common/autotest_common.sh@940 -- # kill -0 1240730 00:27:19.199 15:04:01 -- common/autotest_common.sh@941 -- # uname 00:27:19.199 15:04:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:19.199 
15:04:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1240730 00:27:19.460 15:04:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:19.460 15:04:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:19.460 15:04:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1240730' 00:27:19.460 killing process with pid 1240730 00:27:19.460 15:04:01 -- common/autotest_common.sh@955 -- # kill 1240730 00:27:19.460 [2024-04-26 15:04:01.870203] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:19.460 15:04:01 -- common/autotest_common.sh@960 -- # wait 1240730 00:27:19.724 15:04:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:19.724 15:04:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:19.724 15:04:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:19.724 15:04:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:19.724 15:04:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:19.724 15:04:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.724 15:04:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:19.724 15:04:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.674 15:04:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:21.674 00:27:21.674 real 0m12.659s 00:27:21.674 user 0m10.381s 00:27:21.674 sys 0m6.017s 00:27:21.674 15:04:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:21.674 15:04:04 -- common/autotest_common.sh@10 -- # set +x 00:27:21.674 ************************************ 00:27:21.674 END TEST nvmf_identify_passthru 00:27:21.674 ************************************ 00:27:21.674 15:04:04 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:21.674 15:04:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:21.674 15:04:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:21.674 15:04:04 -- common/autotest_common.sh@10 -- # set +x 00:27:21.936 ************************************ 00:27:21.936 START TEST nvmf_dif 00:27:21.936 ************************************ 00:27:21.936 15:04:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:21.936 * Looking for test storage... 
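Stepping back over the identify_passthru run that just finished: the ordering matters, because nvmf_set_config --passthru-identify-ctrlr is only accepted while the target is still in its --wait-for-rpc state, before framework_start_init. After that, the local PCIe controller is attached as a bdev and re-exported over NVMe/TCP so the serial and model number seen through the fabric can be matched against the PCIe values captured earlier. A sketch of that RPC sequence, assuming rpc.py can reach the target's default /var/tmp/spdk.sock (the addresses and the 0000:65:00.0 BDF mirror this run):

RPC=./scripts/rpc.py

# Only valid while nvmf_tgt is waiting for RPCs (started with --wait-for-rpc).
$RPC nvmf_set_config --passthru-identify-ctrlr
$RPC framework_start_init

# Re-export the local controller over NVMe/TCP.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With that in place, running spdk_nvme_identify against trtype:tcp traddr:10.0.0.2 should report the same serial/model as the PCIe identify, which is exactly the '!=' comparison the script performs before tearing the subsystem down.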
00:27:21.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:21.936 15:04:04 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.936 15:04:04 -- nvmf/common.sh@7 -- # uname -s 00:27:21.936 15:04:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.936 15:04:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.936 15:04:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.936 15:04:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.936 15:04:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.936 15:04:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.936 15:04:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.936 15:04:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.936 15:04:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.936 15:04:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.936 15:04:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:21.936 15:04:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:21.936 15:04:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.936 15:04:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.936 15:04:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.936 15:04:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.936 15:04:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.936 15:04:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.936 15:04:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.936 15:04:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.936 15:04:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.936 15:04:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.936 15:04:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.936 15:04:04 -- paths/export.sh@5 -- # export PATH 00:27:21.936 15:04:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.936 15:04:04 -- nvmf/common.sh@47 -- # : 0 00:27:21.936 15:04:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:21.936 15:04:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.936 15:04:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.936 15:04:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.936 15:04:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.936 15:04:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.936 15:04:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.936 15:04:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.936 15:04:04 -- target/dif.sh@15 -- # NULL_META=16 00:27:21.936 15:04:04 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:21.936 15:04:04 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:21.936 15:04:04 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:21.936 15:04:04 -- target/dif.sh@135 -- # nvmftestinit 00:27:21.936 15:04:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:21.936 15:04:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.936 15:04:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:21.936 15:04:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:21.936 15:04:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:21.936 15:04:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.936 15:04:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:21.936 15:04:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.936 15:04:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:21.936 15:04:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:21.936 15:04:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.936 15:04:04 -- common/autotest_common.sh@10 -- # set +x 00:27:30.076 15:04:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:30.076 15:04:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:30.076 15:04:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:30.076 15:04:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:30.076 15:04:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:30.076 15:04:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:30.076 15:04:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:30.076 15:04:11 -- nvmf/common.sh@295 -- # net_devs=() 00:27:30.076 15:04:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:30.076 15:04:11 -- nvmf/common.sh@296 -- # e810=() 00:27:30.076 15:04:11 -- nvmf/common.sh@296 -- # local -ga e810 00:27:30.076 15:04:11 -- nvmf/common.sh@297 -- # x722=() 00:27:30.076 15:04:11 -- nvmf/common.sh@297 -- # local -ga x722 00:27:30.076 15:04:11 -- nvmf/common.sh@298 -- # mlx=() 00:27:30.076 15:04:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:30.076 15:04:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:30.076 15:04:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.076 15:04:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:30.076 15:04:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:30.076 15:04:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:30.076 15:04:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.076 15:04:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:30.076 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:30.076 15:04:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.076 15:04:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:30.076 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:30.076 15:04:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:30.076 15:04:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.076 15:04:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.076 15:04:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:30.076 15:04:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.076 15:04:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:30.076 Found net devices under 0000:31:00.0: cvl_0_0 00:27:30.076 15:04:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.076 15:04:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.076 15:04:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.076 15:04:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:30.076 15:04:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.076 15:04:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:30.076 Found net devices under 0000:31:00.1: cvl_0_1 00:27:30.076 15:04:11 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:30.076 15:04:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:30.076 15:04:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:30.076 15:04:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:30.076 15:04:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:30.076 15:04:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.076 15:04:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.076 15:04:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.076 15:04:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:30.076 15:04:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.076 15:04:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.076 15:04:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:30.076 15:04:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.076 15:04:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.076 15:04:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:30.076 15:04:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:30.076 15:04:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.076 15:04:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.076 15:04:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.076 15:04:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.076 15:04:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:30.076 15:04:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.076 15:04:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.076 15:04:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.076 15:04:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:30.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:27:30.076 00:27:30.076 --- 10.0.0.2 ping statistics --- 00:27:30.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.076 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:27:30.076 15:04:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:27:30.076 00:27:30.076 --- 10.0.0.1 ping statistics --- 00:27:30.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.076 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:27:30.076 15:04:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.076 15:04:11 -- nvmf/common.sh@411 -- # return 0 00:27:30.076 15:04:11 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:30.076 15:04:11 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:31.993 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:31.993 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:31.993 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:32.252 15:04:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.252 15:04:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:32.252 15:04:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:32.252 15:04:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.252 15:04:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:32.252 15:04:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:32.252 15:04:14 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:32.253 15:04:14 -- target/dif.sh@137 -- # nvmfappstart 00:27:32.253 15:04:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:32.253 15:04:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:32.513 15:04:14 -- common/autotest_common.sh@10 -- # set +x 00:27:32.513 15:04:14 -- nvmf/common.sh@470 -- # nvmfpid=1246925 00:27:32.513 15:04:14 -- nvmf/common.sh@471 -- # waitforlisten 1246925 00:27:32.513 15:04:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:32.513 15:04:14 -- common/autotest_common.sh@817 -- # '[' -z 1246925 ']' 00:27:32.513 15:04:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.513 15:04:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:32.513 15:04:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:32.513 15:04:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:32.513 15:04:14 -- common/autotest_common.sh@10 -- # set +x 00:27:32.513 [2024-04-26 15:04:14.977662] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:32.513 [2024-04-26 15:04:14.977724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.513 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.513 [2024-04-26 15:04:15.048899] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.513 [2024-04-26 15:04:15.121582] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.513 [2024-04-26 15:04:15.121620] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.513 [2024-04-26 15:04:15.121628] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.513 [2024-04-26 15:04:15.121634] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.513 [2024-04-26 15:04:15.121640] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.513 [2024-04-26 15:04:15.121659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.083 15:04:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:33.083 15:04:15 -- common/autotest_common.sh@850 -- # return 0 00:27:33.083 15:04:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:33.083 15:04:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:33.083 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:33.345 15:04:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.345 15:04:15 -- target/dif.sh@139 -- # create_transport 00:27:33.345 15:04:15 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:33.345 15:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:33.345 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:33.345 [2024-04-26 15:04:15.788343] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.345 15:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:33.345 15:04:15 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:33.345 15:04:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:33.345 15:04:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:33.345 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:33.345 ************************************ 00:27:33.345 START TEST fio_dif_1_default 00:27:33.345 ************************************ 00:27:33.345 15:04:15 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:33.345 15:04:15 -- target/dif.sh@86 -- # create_subsystems 0 00:27:33.345 15:04:15 -- target/dif.sh@28 -- # local sub 00:27:33.345 15:04:15 -- target/dif.sh@30 -- # for sub in "$@" 00:27:33.345 15:04:15 -- target/dif.sh@31 -- # create_subsystem 0 00:27:33.345 15:04:15 -- target/dif.sh@18 -- # local sub_id=0 00:27:33.345 15:04:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:33.345 15:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:33.345 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:33.345 
bdev_null0 00:27:33.345 15:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:33.345 15:04:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:33.345 15:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:33.345 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:33.345 15:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:33.345 15:04:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:33.345 15:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:33.345 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:33.345 15:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:33.345 15:04:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:33.345 15:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:33.345 15:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:33.345 [2024-04-26 15:04:15.992995] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.345 15:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:33.345 15:04:15 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:33.345 15:04:15 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:33.345 15:04:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:33.345 15:04:15 -- nvmf/common.sh@521 -- # config=() 00:27:33.345 15:04:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.345 15:04:15 -- nvmf/common.sh@521 -- # local subsystem config 00:27:33.345 15:04:15 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.345 15:04:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:33.345 15:04:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:33.345 { 00:27:33.345 "params": { 00:27:33.345 "name": "Nvme$subsystem", 00:27:33.345 "trtype": "$TEST_TRANSPORT", 00:27:33.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.345 "adrfam": "ipv4", 00:27:33.345 "trsvcid": "$NVMF_PORT", 00:27:33.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.345 "hdgst": ${hdgst:-false}, 00:27:33.345 "ddgst": ${ddgst:-false} 00:27:33.345 }, 00:27:33.345 "method": "bdev_nvme_attach_controller" 00:27:33.345 } 00:27:33.345 EOF 00:27:33.345 )") 00:27:33.345 15:04:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:33.345 15:04:15 -- target/dif.sh@82 -- # gen_fio_conf 00:27:33.345 15:04:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:33.345 15:04:16 -- target/dif.sh@54 -- # local file 00:27:33.345 15:04:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:33.345 15:04:16 -- target/dif.sh@56 -- # cat 00:27:33.345 15:04:16 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:33.345 15:04:16 -- common/autotest_common.sh@1327 -- # shift 00:27:33.345 15:04:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:33.345 15:04:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.345 15:04:16 -- nvmf/common.sh@543 -- # cat 00:27:33.345 15:04:16 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:33.345 15:04:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:33.345 15:04:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:33.345 15:04:16 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:33.345 15:04:16 -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.345 15:04:16 -- nvmf/common.sh@545 -- # jq . 00:27:33.651 15:04:16 -- nvmf/common.sh@546 -- # IFS=, 00:27:33.651 15:04:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:33.651 "params": { 00:27:33.651 "name": "Nvme0", 00:27:33.651 "trtype": "tcp", 00:27:33.651 "traddr": "10.0.0.2", 00:27:33.651 "adrfam": "ipv4", 00:27:33.651 "trsvcid": "4420", 00:27:33.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:33.651 "hdgst": false, 00:27:33.651 "ddgst": false 00:27:33.651 }, 00:27:33.651 "method": "bdev_nvme_attach_controller" 00:27:33.651 }' 00:27:33.651 15:04:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:33.651 15:04:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:33.651 15:04:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.651 15:04:16 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:33.651 15:04:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:33.651 15:04:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:33.651 15:04:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:33.651 15:04:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:33.651 15:04:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:33.651 15:04:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.925 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:33.925 fio-3.35 00:27:33.925 Starting 1 thread 00:27:33.925 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.152 00:27:46.153 filename0: (groupid=0, jobs=1): err= 0: pid=1247532: Fri Apr 26 15:04:27 2024 00:27:46.153 read: IOPS=186, BW=748KiB/s (766kB/s)(7504KiB/10034msec) 00:27:46.153 slat (nsec): min=5321, max=53495, avg=6007.48, stdev=1672.93 00:27:46.153 clat (usec): min=761, max=44296, avg=21376.73, stdev=20376.56 00:27:46.153 lat (usec): min=766, max=44331, avg=21382.74, stdev=20376.55 00:27:46.153 clat percentiles (usec): 00:27:46.153 | 1.00th=[ 824], 5.00th=[ 930], 10.00th=[ 947], 20.00th=[ 963], 00:27:46.153 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[41157], 60.00th=[41157], 00:27:46.153 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:46.153 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:27:46.153 | 99.99th=[44303] 00:27:46.153 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=748.80, stdev=30.09, samples=20 00:27:46.153 iops : min= 176, max= 192, avg=187.20, stdev= 7.52, samples=20 00:27:46.153 lat (usec) : 1000=42.86% 00:27:46.153 lat (msec) : 2=7.04%, 50=50.11% 00:27:46.153 cpu : usr=95.53%, sys=4.26%, ctx=11, majf=0, minf=233 00:27:46.153 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:46.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:46.153 
issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:46.153 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:46.153 00:27:46.153 Run status group 0 (all jobs): 00:27:46.153 READ: bw=748KiB/s (766kB/s), 748KiB/s-748KiB/s (766kB/s-766kB/s), io=7504KiB (7684kB), run=10034-10034msec 00:27:46.153 15:04:27 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:46.153 15:04:27 -- target/dif.sh@43 -- # local sub 00:27:46.153 15:04:27 -- target/dif.sh@45 -- # for sub in "$@" 00:27:46.153 15:04:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:46.153 15:04:27 -- target/dif.sh@36 -- # local sub_id=0 00:27:46.153 15:04:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 15:04:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 00:27:46.153 real 0m11.291s 00:27:46.153 user 0m21.594s 00:27:46.153 sys 0m0.745s 00:27:46.153 15:04:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 ************************************ 00:27:46.153 END TEST fio_dif_1_default 00:27:46.153 ************************************ 00:27:46.153 15:04:27 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:46.153 15:04:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:46.153 15:04:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 ************************************ 00:27:46.153 START TEST fio_dif_1_multi_subsystems 00:27:46.153 ************************************ 00:27:46.153 15:04:27 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:46.153 15:04:27 -- target/dif.sh@92 -- # local files=1 00:27:46.153 15:04:27 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:46.153 15:04:27 -- target/dif.sh@28 -- # local sub 00:27:46.153 15:04:27 -- target/dif.sh@30 -- # for sub in "$@" 00:27:46.153 15:04:27 -- target/dif.sh@31 -- # create_subsystem 0 00:27:46.153 15:04:27 -- target/dif.sh@18 -- # local sub_id=0 00:27:46.153 15:04:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 bdev_null0 00:27:46.153 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 15:04:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 15:04:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 
15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 15:04:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 [2024-04-26 15:04:27.477481] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.153 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 15:04:27 -- target/dif.sh@30 -- # for sub in "$@" 00:27:46.153 15:04:27 -- target/dif.sh@31 -- # create_subsystem 1 00:27:46.153 15:04:27 -- target/dif.sh@18 -- # local sub_id=1 00:27:46.153 15:04:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 bdev_null1 00:27:46.153 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 15:04:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 15:04:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 15:04:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.153 15:04:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.153 15:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:46.153 15:04:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.153 15:04:27 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:46.153 15:04:27 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:46.153 15:04:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:46.153 15:04:27 -- nvmf/common.sh@521 -- # config=() 00:27:46.153 15:04:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.153 15:04:27 -- nvmf/common.sh@521 -- # local subsystem config 00:27:46.153 15:04:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:46.153 15:04:27 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.153 15:04:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:46.153 { 00:27:46.153 "params": { 00:27:46.153 "name": "Nvme$subsystem", 00:27:46.153 "trtype": "$TEST_TRANSPORT", 00:27:46.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.153 "adrfam": "ipv4", 00:27:46.153 "trsvcid": "$NVMF_PORT", 00:27:46.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.153 "hdgst": ${hdgst:-false}, 00:27:46.153 "ddgst": ${ddgst:-false} 00:27:46.153 }, 00:27:46.153 "method": "bdev_nvme_attach_controller" 00:27:46.153 } 00:27:46.153 EOF 00:27:46.153 )") 00:27:46.153 15:04:27 -- target/dif.sh@82 -- # gen_fio_conf 
00:27:46.153 15:04:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:46.153 15:04:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:46.153 15:04:27 -- target/dif.sh@54 -- # local file 00:27:46.153 15:04:27 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:46.153 15:04:27 -- target/dif.sh@56 -- # cat 00:27:46.153 15:04:27 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:46.153 15:04:27 -- common/autotest_common.sh@1327 -- # shift 00:27:46.153 15:04:27 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:46.153 15:04:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:46.153 15:04:27 -- nvmf/common.sh@543 -- # cat 00:27:46.153 15:04:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:46.153 15:04:27 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:46.153 15:04:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:46.153 15:04:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:46.153 15:04:27 -- target/dif.sh@72 -- # (( file <= files )) 00:27:46.153 15:04:27 -- target/dif.sh@73 -- # cat 00:27:46.153 15:04:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:46.153 15:04:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:46.153 { 00:27:46.153 "params": { 00:27:46.153 "name": "Nvme$subsystem", 00:27:46.153 "trtype": "$TEST_TRANSPORT", 00:27:46.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.153 "adrfam": "ipv4", 00:27:46.153 "trsvcid": "$NVMF_PORT", 00:27:46.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.154 "hdgst": ${hdgst:-false}, 00:27:46.154 "ddgst": ${ddgst:-false} 00:27:46.154 }, 00:27:46.154 "method": "bdev_nvme_attach_controller" 00:27:46.154 } 00:27:46.154 EOF 00:27:46.154 )") 00:27:46.154 15:04:27 -- target/dif.sh@72 -- # (( file++ )) 00:27:46.154 15:04:27 -- target/dif.sh@72 -- # (( file <= files )) 00:27:46.154 15:04:27 -- nvmf/common.sh@543 -- # cat 00:27:46.154 15:04:27 -- nvmf/common.sh@545 -- # jq . 
00:27:46.154 15:04:27 -- nvmf/common.sh@546 -- # IFS=, 00:27:46.154 15:04:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:46.154 "params": { 00:27:46.154 "name": "Nvme0", 00:27:46.154 "trtype": "tcp", 00:27:46.154 "traddr": "10.0.0.2", 00:27:46.154 "adrfam": "ipv4", 00:27:46.154 "trsvcid": "4420", 00:27:46.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:46.154 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:46.154 "hdgst": false, 00:27:46.154 "ddgst": false 00:27:46.154 }, 00:27:46.154 "method": "bdev_nvme_attach_controller" 00:27:46.154 },{ 00:27:46.154 "params": { 00:27:46.154 "name": "Nvme1", 00:27:46.154 "trtype": "tcp", 00:27:46.154 "traddr": "10.0.0.2", 00:27:46.154 "adrfam": "ipv4", 00:27:46.154 "trsvcid": "4420", 00:27:46.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.154 "hdgst": false, 00:27:46.154 "ddgst": false 00:27:46.154 }, 00:27:46.154 "method": "bdev_nvme_attach_controller" 00:27:46.154 }' 00:27:46.154 15:04:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:46.154 15:04:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:46.154 15:04:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:46.154 15:04:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:46.154 15:04:27 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:46.154 15:04:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:46.154 15:04:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:46.154 15:04:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:46.154 15:04:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:46.154 15:04:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:46.154 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:46.154 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:46.154 fio-3.35 00:27:46.154 Starting 2 threads 00:27:46.154 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.394 00:27:58.394 filename0: (groupid=0, jobs=1): err= 0: pid=1249906: Fri Apr 26 15:04:38 2024 00:27:58.394 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10020msec) 00:27:58.394 slat (nsec): min=5333, max=40154, avg=6397.98, stdev=1946.40 00:27:58.394 clat (usec): min=40952, max=42521, avg=41905.05, stdev=264.24 00:27:58.394 lat (usec): min=40960, max=42554, avg=41911.45, stdev=264.36 00:27:58.394 clat percentiles (usec): 00:27:58.394 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:27:58.394 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:58.394 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:58.394 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:58.394 | 99.99th=[42730] 00:27:58.394 bw ( KiB/s): min= 352, max= 384, per=33.81%, avg=380.80, stdev= 9.85, samples=20 00:27:58.394 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:27:58.394 lat (msec) : 50=100.00% 00:27:58.394 cpu : usr=96.97%, sys=2.83%, ctx=12, majf=0, minf=129 00:27:58.394 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:58.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:58.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.394 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.394 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:58.394 filename1: (groupid=0, jobs=1): err= 0: pid=1249907: Fri Apr 26 15:04:38 2024 00:27:58.394 read: IOPS=185, BW=743KiB/s (761kB/s)(7456KiB/10035msec) 00:27:58.394 slat (nsec): min=5321, max=32700, avg=5561.12, stdev=882.15 00:27:58.394 clat (usec): min=749, max=42534, avg=21518.12, stdev=20408.23 00:27:58.394 lat (usec): min=754, max=42566, avg=21523.68, stdev=20408.21 00:27:58.394 clat percentiles (usec): 00:27:58.394 | 1.00th=[ 816], 5.00th=[ 938], 10.00th=[ 955], 20.00th=[ 971], 00:27:58.394 | 30.00th=[ 1020], 40.00th=[ 1139], 50.00th=[41157], 60.00th=[41681], 00:27:58.394 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:58.394 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:58.394 | 99.99th=[42730] 00:27:58.394 bw ( KiB/s): min= 672, max= 768, per=66.19%, avg=744.00, stdev=32.63, samples=20 00:27:58.394 iops : min= 168, max= 192, avg=186.00, stdev= 8.16, samples=20 00:27:58.394 lat (usec) : 750=0.05%, 1000=27.47% 00:27:58.394 lat (msec) : 2=22.26%, 50=50.21% 00:27:58.394 cpu : usr=96.93%, sys=2.87%, ctx=11, majf=0, minf=140 00:27:58.394 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:58.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:58.394 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:58.394 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:58.394 00:27:58.394 Run status group 0 (all jobs): 00:27:58.394 READ: bw=1124KiB/s (1151kB/s), 382KiB/s-743KiB/s (391kB/s-761kB/s), io=11.0MiB (11.6MB), run=10020-10035msec 00:27:58.394 15:04:38 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:58.394 15:04:38 -- target/dif.sh@43 -- # local sub 00:27:58.394 15:04:38 -- target/dif.sh@45 -- # for sub in "$@" 00:27:58.394 15:04:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:58.394 15:04:38 -- target/dif.sh@36 -- # local sub_id=0 00:27:58.394 15:04:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:58.394 15:04:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.394 15:04:38 -- common/autotest_common.sh@10 -- # set +x 00:27:58.394 15:04:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.394 15:04:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:58.394 15:04:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.394 15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:58.394 15:04:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.394 15:04:39 -- target/dif.sh@45 -- # for sub in "$@" 00:27:58.394 15:04:39 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:58.394 15:04:39 -- target/dif.sh@36 -- # local sub_id=1 00:27:58.394 15:04:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.394 15:04:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.394 15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:58.394 15:04:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.394 15:04:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:58.394 15:04:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.394 
15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:58.394 15:04:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.394 00:27:58.394 real 0m11.596s 00:27:58.394 user 0m37.120s 00:27:58.394 sys 0m0.891s 00:27:58.394 15:04:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:58.394 15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:58.394 ************************************ 00:27:58.394 END TEST fio_dif_1_multi_subsystems 00:27:58.394 ************************************ 00:27:58.394 15:04:39 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:58.394 15:04:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:58.394 15:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:58.394 15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:58.394 ************************************ 00:27:58.394 START TEST fio_dif_rand_params 00:27:58.394 ************************************ 00:27:58.394 15:04:39 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:27:58.394 15:04:39 -- target/dif.sh@100 -- # local NULL_DIF 00:27:58.394 15:04:39 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:58.394 15:04:39 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:58.394 15:04:39 -- target/dif.sh@103 -- # bs=128k 00:27:58.394 15:04:39 -- target/dif.sh@103 -- # numjobs=3 00:27:58.394 15:04:39 -- target/dif.sh@103 -- # iodepth=3 00:27:58.394 15:04:39 -- target/dif.sh@103 -- # runtime=5 00:27:58.394 15:04:39 -- target/dif.sh@105 -- # create_subsystems 0 00:27:58.395 15:04:39 -- target/dif.sh@28 -- # local sub 00:27:58.395 15:04:39 -- target/dif.sh@30 -- # for sub in "$@" 00:27:58.395 15:04:39 -- target/dif.sh@31 -- # create_subsystem 0 00:27:58.395 15:04:39 -- target/dif.sh@18 -- # local sub_id=0 00:27:58.395 15:04:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:58.395 15:04:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.395 15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:58.395 bdev_null0 00:27:58.395 15:04:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.395 15:04:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:58.395 15:04:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.395 15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:58.395 15:04:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.395 15:04:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:58.395 15:04:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.395 15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:58.395 15:04:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.395 15:04:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:58.395 15:04:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.395 15:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:58.395 [2024-04-26 15:04:39.256743] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.395 15:04:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.395 15:04:39 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:58.395 15:04:39 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:58.395 15:04:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 
00:27:58.395 15:04:39 -- nvmf/common.sh@521 -- # config=() 00:27:58.395 15:04:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.395 15:04:39 -- nvmf/common.sh@521 -- # local subsystem config 00:27:58.395 15:04:39 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.395 15:04:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:58.395 15:04:39 -- target/dif.sh@82 -- # gen_fio_conf 00:27:58.395 15:04:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:58.395 { 00:27:58.395 "params": { 00:27:58.395 "name": "Nvme$subsystem", 00:27:58.395 "trtype": "$TEST_TRANSPORT", 00:27:58.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.395 "adrfam": "ipv4", 00:27:58.395 "trsvcid": "$NVMF_PORT", 00:27:58.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.395 "hdgst": ${hdgst:-false}, 00:27:58.395 "ddgst": ${ddgst:-false} 00:27:58.395 }, 00:27:58.395 "method": "bdev_nvme_attach_controller" 00:27:58.395 } 00:27:58.395 EOF 00:27:58.395 )") 00:27:58.395 15:04:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:58.395 15:04:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:58.395 15:04:39 -- target/dif.sh@54 -- # local file 00:27:58.395 15:04:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:58.395 15:04:39 -- target/dif.sh@56 -- # cat 00:27:58.395 15:04:39 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:58.395 15:04:39 -- common/autotest_common.sh@1327 -- # shift 00:27:58.395 15:04:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:58.395 15:04:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.395 15:04:39 -- nvmf/common.sh@543 -- # cat 00:27:58.395 15:04:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:58.395 15:04:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:58.395 15:04:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:58.395 15:04:39 -- target/dif.sh@72 -- # (( file <= files )) 00:27:58.395 15:04:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:58.395 15:04:39 -- nvmf/common.sh@545 -- # jq . 
00:27:58.395 15:04:39 -- nvmf/common.sh@546 -- # IFS=, 00:27:58.395 15:04:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:58.395 "params": { 00:27:58.395 "name": "Nvme0", 00:27:58.395 "trtype": "tcp", 00:27:58.395 "traddr": "10.0.0.2", 00:27:58.395 "adrfam": "ipv4", 00:27:58.395 "trsvcid": "4420", 00:27:58.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:58.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:58.395 "hdgst": false, 00:27:58.395 "ddgst": false 00:27:58.395 }, 00:27:58.395 "method": "bdev_nvme_attach_controller" 00:27:58.395 }' 00:27:58.395 15:04:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:58.395 15:04:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:58.395 15:04:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.395 15:04:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:58.395 15:04:39 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:58.395 15:04:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:58.395 15:04:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:58.395 15:04:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:58.395 15:04:39 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:58.395 15:04:39 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.395 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:58.395 ... 00:27:58.395 fio-3.35 00:27:58.395 Starting 3 threads 00:27:58.395 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.688 00:28:03.688 filename0: (groupid=0, jobs=1): err= 0: pid=1252261: Fri Apr 26 15:04:45 2024 00:28:03.688 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(160MiB/5009msec) 00:28:03.688 slat (nsec): min=3099, max=16178, avg=5930.28, stdev=605.77 00:28:03.688 clat (usec): min=5865, max=52603, avg=11703.19, stdev=6501.27 00:28:03.688 lat (usec): min=5871, max=52608, avg=11709.12, stdev=6501.27 00:28:03.688 clat percentiles (usec): 00:28:03.688 | 1.00th=[ 6456], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 8848], 00:28:03.688 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10683], 60.00th=[11338], 00:28:03.688 | 70.00th=[11994], 80.00th=[12911], 90.00th=[13829], 95.00th=[14615], 00:28:03.688 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:28:03.688 | 99.99th=[52691] 00:28:03.688 bw ( KiB/s): min=24576, max=37120, per=38.20%, avg=32768.00, stdev=3910.46, samples=10 00:28:03.689 iops : min= 192, max= 290, avg=256.00, stdev=30.55, samples=10 00:28:03.689 lat (msec) : 10=39.20%, 20=58.22%, 50=1.64%, 100=0.94% 00:28:03.689 cpu : usr=95.53%, sys=4.25%, ctx=21, majf=0, minf=135 00:28:03.689 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.689 issued rwts: total=1283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.689 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.689 filename0: (groupid=0, jobs=1): err= 0: pid=1252262: Fri Apr 26 15:04:45 2024 00:28:03.689 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(127MiB/5029msec) 00:28:03.689 slat (nsec): min=5343, max=30309, avg=7603.16, stdev=1720.95 00:28:03.689 clat (usec): 
min=5937, max=89614, avg=14879.77, stdev=11852.29 00:28:03.689 lat (usec): min=5945, max=89622, avg=14887.38, stdev=11852.29 00:28:03.689 clat percentiles (usec): 00:28:03.689 | 1.00th=[ 6783], 5.00th=[ 7439], 10.00th=[ 8029], 20.00th=[ 9503], 00:28:03.689 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11338], 60.00th=[12125], 00:28:03.689 | 70.00th=[13042], 80.00th=[14091], 90.00th=[16712], 95.00th=[50594], 00:28:03.689 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[89654], 00:28:03.689 | 99.99th=[89654] 00:28:03.689 bw ( KiB/s): min=17664, max=32000, per=30.14%, avg=25856.00, stdev=4407.70, samples=10 00:28:03.689 iops : min= 138, max= 250, avg=202.00, stdev=34.44, samples=10 00:28:03.689 lat (msec) : 10=28.23%, 20=62.39%, 50=3.75%, 100=5.63% 00:28:03.689 cpu : usr=96.36%, sys=3.42%, ctx=8, majf=0, minf=65 00:28:03.689 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.689 issued rwts: total=1013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.689 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.689 filename0: (groupid=0, jobs=1): err= 0: pid=1252263: Fri Apr 26 15:04:45 2024 00:28:03.689 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5016msec) 00:28:03.689 slat (nsec): min=5322, max=30460, avg=8087.08, stdev=1471.28 00:28:03.689 clat (usec): min=6103, max=56951, avg=13998.90, stdev=9324.21 00:28:03.689 lat (usec): min=6109, max=56956, avg=14006.98, stdev=9324.17 00:28:03.689 clat percentiles (usec): 00:28:03.689 | 1.00th=[ 6587], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9765], 00:28:03.689 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11731], 60.00th=[12518], 00:28:03.689 | 70.00th=[13566], 80.00th=[14615], 90.00th=[16057], 95.00th=[47973], 00:28:03.689 | 99.00th=[52691], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 00:28:03.689 | 99.99th=[56886] 00:28:03.689 bw ( KiB/s): min=18432, max=33024, per=31.97%, avg=27422.70, stdev=5189.58, samples=10 00:28:03.689 iops : min= 144, max= 258, avg=214.20, stdev=40.56, samples=10 00:28:03.689 lat (msec) : 10=22.91%, 20=71.51%, 50=1.58%, 100=4.00% 00:28:03.689 cpu : usr=96.23%, sys=3.53%, ctx=14, majf=0, minf=69 00:28:03.689 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.689 issued rwts: total=1074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.689 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.689 00:28:03.689 Run status group 0 (all jobs): 00:28:03.689 READ: bw=83.8MiB/s (87.8MB/s), 25.2MiB/s-32.0MiB/s (26.4MB/s-33.6MB/s), io=421MiB (442MB), run=5009-5029msec 00:28:03.689 15:04:45 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:03.689 15:04:45 -- target/dif.sh@43 -- # local sub 00:28:03.689 15:04:45 -- target/dif.sh@45 -- # for sub in "$@" 00:28:03.689 15:04:45 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:03.689 15:04:45 -- target/dif.sh@36 -- # local sub_id=0 00:28:03.689 15:04:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:28:03.689 15:04:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:03.689 15:04:45 -- target/dif.sh@109 -- # bs=4k 00:28:03.689 15:04:45 -- target/dif.sh@109 -- # numjobs=8 00:28:03.689 15:04:45 -- target/dif.sh@109 -- # iodepth=16 00:28:03.689 15:04:45 -- target/dif.sh@109 -- # runtime= 00:28:03.689 15:04:45 -- target/dif.sh@109 -- # files=2 00:28:03.689 15:04:45 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:03.689 15:04:45 -- target/dif.sh@28 -- # local sub 00:28:03.689 15:04:45 -- target/dif.sh@30 -- # for sub in "$@" 00:28:03.689 15:04:45 -- target/dif.sh@31 -- # create_subsystem 0 00:28:03.689 15:04:45 -- target/dif.sh@18 -- # local sub_id=0 00:28:03.689 15:04:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 bdev_null0 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 [2024-04-26 15:04:45.462730] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@30 -- # for sub in "$@" 00:28:03.689 15:04:45 -- target/dif.sh@31 -- # create_subsystem 1 00:28:03.689 15:04:45 -- target/dif.sh@18 -- # local sub_id=1 00:28:03.689 15:04:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 bdev_null1 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:03.689 15:04:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@30 -- # for sub in "$@" 00:28:03.689 15:04:45 -- target/dif.sh@31 -- # create_subsystem 2 00:28:03.689 15:04:45 -- target/dif.sh@18 -- # local sub_id=2 00:28:03.689 15:04:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 bdev_null2 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:03.689 15:04:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.689 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:28:03.689 15:04:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.689 15:04:45 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:03.689 15:04:45 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:03.689 15:04:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:03.689 15:04:45 -- nvmf/common.sh@521 -- # config=() 00:28:03.689 15:04:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.689 15:04:45 -- nvmf/common.sh@521 -- # local subsystem config 00:28:03.689 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:03.689 15:04:45 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.689 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:03.689 { 00:28:03.689 "params": { 00:28:03.689 "name": "Nvme$subsystem", 00:28:03.689 "trtype": "$TEST_TRANSPORT", 00:28:03.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.689 "adrfam": "ipv4", 00:28:03.689 "trsvcid": "$NVMF_PORT", 00:28:03.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.689 "hdgst": ${hdgst:-false}, 00:28:03.689 "ddgst": ${ddgst:-false} 00:28:03.689 }, 00:28:03.689 "method": "bdev_nvme_attach_controller" 00:28:03.689 } 00:28:03.689 EOF 00:28:03.689 )") 00:28:03.689 15:04:45 -- target/dif.sh@82 -- # gen_fio_conf 
00:28:03.689 15:04:45 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:03.690 15:04:45 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:03.690 15:04:45 -- target/dif.sh@54 -- # local file 00:28:03.690 15:04:45 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:03.690 15:04:45 -- target/dif.sh@56 -- # cat 00:28:03.690 15:04:45 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.690 15:04:45 -- common/autotest_common.sh@1327 -- # shift 00:28:03.690 15:04:45 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:03.690 15:04:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:03.690 15:04:45 -- nvmf/common.sh@543 -- # cat 00:28:03.690 15:04:45 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.690 15:04:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:03.690 15:04:45 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:03.690 15:04:45 -- target/dif.sh@72 -- # (( file <= files )) 00:28:03.690 15:04:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:03.690 15:04:45 -- target/dif.sh@73 -- # cat 00:28:03.690 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:03.690 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:03.690 { 00:28:03.690 "params": { 00:28:03.690 "name": "Nvme$subsystem", 00:28:03.690 "trtype": "$TEST_TRANSPORT", 00:28:03.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.690 "adrfam": "ipv4", 00:28:03.690 "trsvcid": "$NVMF_PORT", 00:28:03.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.690 "hdgst": ${hdgst:-false}, 00:28:03.690 "ddgst": ${ddgst:-false} 00:28:03.690 }, 00:28:03.690 "method": "bdev_nvme_attach_controller" 00:28:03.690 } 00:28:03.690 EOF 00:28:03.690 )") 00:28:03.690 15:04:45 -- target/dif.sh@72 -- # (( file++ )) 00:28:03.690 15:04:45 -- target/dif.sh@72 -- # (( file <= files )) 00:28:03.690 15:04:45 -- target/dif.sh@73 -- # cat 00:28:03.690 15:04:45 -- nvmf/common.sh@543 -- # cat 00:28:03.690 15:04:45 -- target/dif.sh@72 -- # (( file++ )) 00:28:03.690 15:04:45 -- target/dif.sh@72 -- # (( file <= files )) 00:28:03.690 15:04:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:03.690 15:04:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:03.690 { 00:28:03.690 "params": { 00:28:03.690 "name": "Nvme$subsystem", 00:28:03.690 "trtype": "$TEST_TRANSPORT", 00:28:03.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.690 "adrfam": "ipv4", 00:28:03.690 "trsvcid": "$NVMF_PORT", 00:28:03.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.690 "hdgst": ${hdgst:-false}, 00:28:03.690 "ddgst": ${ddgst:-false} 00:28:03.690 }, 00:28:03.690 "method": "bdev_nvme_attach_controller" 00:28:03.690 } 00:28:03.690 EOF 00:28:03.690 )") 00:28:03.690 15:04:45 -- nvmf/common.sh@543 -- # cat 00:28:03.690 15:04:45 -- nvmf/common.sh@545 -- # jq . 
00:28:03.690 15:04:45 -- nvmf/common.sh@546 -- # IFS=, 00:28:03.690 15:04:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:03.690 "params": { 00:28:03.690 "name": "Nvme0", 00:28:03.690 "trtype": "tcp", 00:28:03.690 "traddr": "10.0.0.2", 00:28:03.690 "adrfam": "ipv4", 00:28:03.690 "trsvcid": "4420", 00:28:03.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:03.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:03.690 "hdgst": false, 00:28:03.690 "ddgst": false 00:28:03.690 }, 00:28:03.690 "method": "bdev_nvme_attach_controller" 00:28:03.690 },{ 00:28:03.690 "params": { 00:28:03.690 "name": "Nvme1", 00:28:03.690 "trtype": "tcp", 00:28:03.690 "traddr": "10.0.0.2", 00:28:03.690 "adrfam": "ipv4", 00:28:03.690 "trsvcid": "4420", 00:28:03.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.690 "hdgst": false, 00:28:03.690 "ddgst": false 00:28:03.690 }, 00:28:03.690 "method": "bdev_nvme_attach_controller" 00:28:03.690 },{ 00:28:03.690 "params": { 00:28:03.690 "name": "Nvme2", 00:28:03.690 "trtype": "tcp", 00:28:03.690 "traddr": "10.0.0.2", 00:28:03.690 "adrfam": "ipv4", 00:28:03.690 "trsvcid": "4420", 00:28:03.690 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:03.690 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:03.690 "hdgst": false, 00:28:03.690 "ddgst": false 00:28:03.690 }, 00:28:03.690 "method": "bdev_nvme_attach_controller" 00:28:03.690 }' 00:28:03.690 15:04:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:03.690 15:04:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:03.690 15:04:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:03.690 15:04:45 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:03.690 15:04:45 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:03.690 15:04:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:03.690 15:04:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:03.690 15:04:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:03.690 15:04:45 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:03.690 15:04:45 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:03.690 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:03.690 ... 00:28:03.690 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:03.690 ... 00:28:03.690 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:03.690 ... 
00:28:03.690 fio-3.35 00:28:03.690 Starting 24 threads 00:28:03.690 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.952 00:28:15.952 filename0: (groupid=0, jobs=1): err= 0: pid=1253732: Fri Apr 26 15:04:56 2024 00:28:15.952 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:28:15.952 slat (nsec): min=5500, max=51984, avg=10012.56, stdev=6362.83 00:28:15.952 clat (usec): min=6962, max=34761, avg=32305.68, stdev=2764.88 00:28:15.952 lat (usec): min=6979, max=34770, avg=32315.69, stdev=2764.42 00:28:15.952 clat percentiles (usec): 00:28:15.952 | 1.00th=[13566], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:15.952 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:15.952 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:28:15.952 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:28:15.952 | 99.99th=[34866] 00:28:15.952 bw ( KiB/s): min= 1920, max= 2304, per=4.18%, avg=1973.89, stdev=98.37, samples=19 00:28:15.952 iops : min= 480, max= 576, avg=493.47, stdev=24.59, samples=19 00:28:15.952 lat (msec) : 10=0.93%, 20=0.65%, 50=98.42% 00:28:15.952 cpu : usr=98.51%, sys=0.92%, ctx=115, majf=0, minf=53 00:28:15.952 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:15.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.952 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.952 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.952 filename0: (groupid=0, jobs=1): err= 0: pid=1253733: Fri Apr 26 15:04:56 2024 00:28:15.952 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10009msec) 00:28:15.952 slat (nsec): min=5493, max=71591, avg=6940.69, stdev=2865.41 00:28:15.952 clat (usec): min=12764, max=47243, avg=32383.10, stdev=2625.38 00:28:15.952 lat (usec): min=12771, max=47250, avg=32390.04, stdev=2625.15 00:28:15.952 clat percentiles (usec): 00:28:15.952 | 1.00th=[21890], 5.00th=[27657], 10.00th=[32375], 20.00th=[32375], 00:28:15.952 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:15.952 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:28:15.952 | 99.00th=[41157], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:28:15.953 | 99.99th=[47449] 00:28:15.953 bw ( KiB/s): min= 1920, max= 2192, per=4.18%, avg=1970.53, stdev=80.58, samples=19 00:28:15.953 iops : min= 480, max= 548, avg=492.63, stdev=20.14, samples=19 00:28:15.953 lat (msec) : 20=0.49%, 50=99.51% 00:28:15.953 cpu : usr=98.93%, sys=0.73%, ctx=71, majf=0, minf=93 00:28:15.953 IO depths : 1=4.9%, 2=10.8%, 4=23.4%, 8=53.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:28:15.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 issued rwts: total=4936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.953 filename0: (groupid=0, jobs=1): err= 0: pid=1253734: Fri Apr 26 15:04:56 2024 00:28:15.953 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10014msec) 00:28:15.953 slat (nsec): min=5525, max=81998, avg=13078.91, stdev=9730.04 00:28:15.953 clat (usec): min=16853, max=40836, avg=32512.38, stdev=1303.32 00:28:15.953 lat (usec): min=16859, max=40843, avg=32525.46, stdev=1303.31 00:28:15.953 clat percentiles (usec): 00:28:15.953 | 1.00th=[24511], 
5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:15.953 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:15.953 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.953 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34866], 00:28:15.953 | 99.99th=[40633] 00:28:15.953 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1960.42, stdev=61.13, samples=19 00:28:15.953 iops : min= 480, max= 512, avg=490.11, stdev=15.28, samples=19 00:28:15.953 lat (msec) : 20=0.37%, 50=99.63% 00:28:15.953 cpu : usr=98.39%, sys=1.00%, ctx=133, majf=0, minf=74 00:28:15.953 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:15.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.953 filename0: (groupid=0, jobs=1): err= 0: pid=1253735: Fri Apr 26 15:04:56 2024 00:28:15.953 read: IOPS=505, BW=2021KiB/s (2069kB/s)(19.8MiB/10020msec) 00:28:15.953 slat (nsec): min=2895, max=36327, avg=7101.39, stdev=2884.21 00:28:15.953 clat (usec): min=2775, max=34886, avg=31613.35, stdev=4149.57 00:28:15.953 lat (usec): min=2781, max=34893, avg=31620.45, stdev=4149.72 00:28:15.953 clat percentiles (usec): 00:28:15.953 | 1.00th=[10159], 5.00th=[22676], 10.00th=[32113], 20.00th=[32375], 00:28:15.953 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:15.953 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.953 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:28:15.953 | 99.99th=[34866] 00:28:15.953 bw ( KiB/s): min= 1920, max= 2592, per=4.28%, avg=2018.20, stdev=153.61, samples=20 00:28:15.953 iops : min= 480, max= 648, avg=504.55, stdev=38.40, samples=20 00:28:15.953 lat (msec) : 4=0.32%, 10=0.67%, 20=2.77%, 50=96.25% 00:28:15.953 cpu : usr=98.84%, sys=0.72%, ctx=104, majf=0, minf=113 00:28:15.953 IO depths : 1=3.0%, 2=9.2%, 4=24.7%, 8=53.6%, 16=9.5%, 32=0.0%, >=64=0.0% 00:28:15.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 issued rwts: total=5062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.953 filename0: (groupid=0, jobs=1): err= 0: pid=1253737: Fri Apr 26 15:04:56 2024 00:28:15.953 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10007msec) 00:28:15.953 slat (nsec): min=5496, max=66990, avg=13762.98, stdev=11028.28 00:28:15.953 clat (usec): min=5958, max=48670, avg=31372.28, stdev=4598.86 00:28:15.953 lat (usec): min=5974, max=48693, avg=31386.04, stdev=4600.64 00:28:15.953 clat percentiles (usec): 00:28:15.953 | 1.00th=[14091], 5.00th=[22938], 10.00th=[25035], 20.00th=[29492], 00:28:15.953 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:15.953 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[37487], 00:28:15.953 | 99.00th=[41681], 99.50th=[44303], 99.90th=[48497], 99.95th=[48497], 00:28:15.953 | 99.99th=[48497] 00:28:15.953 bw ( KiB/s): min= 1920, max= 2336, per=4.31%, avg=2033.05, stdev=116.27, samples=19 00:28:15.953 iops : min= 480, max= 584, avg=508.26, stdev=29.07, samples=19 00:28:15.953 lat (msec) : 10=0.31%, 20=1.99%, 50=97.70% 00:28:15.953 cpu : 
usr=99.04%, sys=0.69%, ctx=15, majf=0, minf=59 00:28:15.953 IO depths : 1=3.4%, 2=7.1%, 4=17.0%, 8=62.8%, 16=9.7%, 32=0.0%, >=64=0.0% 00:28:15.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 complete : 0=0.0%, 4=92.0%, 8=2.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 issued rwts: total=5086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.953 filename0: (groupid=0, jobs=1): err= 0: pid=1253738: Fri Apr 26 15:04:56 2024 00:28:15.953 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10006msec) 00:28:15.953 slat (nsec): min=5480, max=44601, avg=11078.53, stdev=6699.53 00:28:15.953 clat (usec): min=13782, max=50293, avg=32593.63, stdev=1924.46 00:28:15.953 lat (usec): min=13789, max=50311, avg=32604.71, stdev=1924.55 00:28:15.953 clat percentiles (usec): 00:28:15.953 | 1.00th=[24511], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:15.953 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:15.953 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.953 | 99.00th=[39584], 99.50th=[41157], 99.90th=[50070], 99.95th=[50070], 00:28:15.953 | 99.99th=[50070] 00:28:15.953 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1953.68, stdev=70.53, samples=19 00:28:15.953 iops : min= 448, max= 512, avg=488.42, stdev=17.63, samples=19 00:28:15.953 lat (msec) : 20=0.65%, 50=99.02%, 100=0.33% 00:28:15.953 cpu : usr=99.14%, sys=0.53%, ctx=80, majf=0, minf=50 00:28:15.953 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:28:15.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.953 filename0: (groupid=0, jobs=1): err= 0: pid=1253739: Fri Apr 26 15:04:56 2024 00:28:15.953 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10004msec) 00:28:15.953 slat (nsec): min=5595, max=56955, avg=14299.53, stdev=8773.53 00:28:15.953 clat (usec): min=14769, max=38394, avg=32567.87, stdev=1198.63 00:28:15.953 lat (usec): min=14774, max=38412, avg=32582.17, stdev=1198.55 00:28:15.953 clat percentiles (usec): 00:28:15.953 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:15.953 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:28:15.953 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:28:15.953 | 99.00th=[33817], 99.50th=[34866], 99.90th=[38536], 99.95th=[38536], 00:28:15.953 | 99.99th=[38536] 00:28:15.953 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1953.68, stdev=57.91, samples=19 00:28:15.953 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:28:15.953 lat (msec) : 20=0.33%, 50=99.67% 00:28:15.953 cpu : usr=99.01%, sys=0.70%, ctx=51, majf=0, minf=52 00:28:15.953 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:15.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.953 filename0: (groupid=0, jobs=1): err= 0: pid=1253740: Fri Apr 26 15:04:56 2024 00:28:15.953 read: IOPS=489, BW=1957KiB/s 
(2004kB/s)(19.1MiB/10009msec) 00:28:15.953 slat (nsec): min=5935, max=91133, avg=27303.13, stdev=15168.68 00:28:15.953 clat (usec): min=15347, max=44147, avg=32479.89, stdev=860.63 00:28:15.953 lat (usec): min=15355, max=44167, avg=32507.19, stdev=859.71 00:28:15.953 clat percentiles (usec): 00:28:15.953 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:15.953 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:15.953 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:28:15.953 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[36963], 00:28:15.953 | 99.99th=[44303] 00:28:15.953 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1953.68, stdev=57.91, samples=19 00:28:15.953 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:28:15.953 lat (msec) : 20=0.04%, 50=99.96% 00:28:15.953 cpu : usr=98.27%, sys=0.99%, ctx=432, majf=0, minf=69 00:28:15.953 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:15.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.953 filename1: (groupid=0, jobs=1): err= 0: pid=1253741: Fri Apr 26 15:04:56 2024 00:28:15.953 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:28:15.953 slat (nsec): min=5489, max=97589, avg=26302.97, stdev=15895.01 00:28:15.953 clat (usec): min=9821, max=55179, avg=32435.32, stdev=2035.19 00:28:15.953 lat (usec): min=9827, max=55196, avg=32461.62, stdev=2035.77 00:28:15.953 clat percentiles (usec): 00:28:15.953 | 1.00th=[30802], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:15.953 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:15.953 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:15.953 | 99.00th=[33817], 99.50th=[34341], 99.90th=[55313], 99.95th=[55313], 00:28:15.953 | 99.99th=[55313] 00:28:15.953 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1946.89, stdev=68.24, samples=19 00:28:15.953 iops : min= 448, max= 512, avg=486.68, stdev=17.15, samples=19 00:28:15.953 lat (msec) : 10=0.12%, 20=0.53%, 50=99.02%, 100=0.33% 00:28:15.953 cpu : usr=99.15%, sys=0.55%, ctx=12, majf=0, minf=73 00:28:15.953 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:15.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.953 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.954 filename1: (groupid=0, jobs=1): err= 0: pid=1253742: Fri Apr 26 15:04:56 2024 00:28:15.954 read: IOPS=484, BW=1940KiB/s (1986kB/s)(19.0MiB/10005msec) 00:28:15.954 slat (nsec): min=5497, max=93928, avg=21368.66, stdev=16703.27 00:28:15.954 clat (usec): min=9804, max=55463, avg=32808.66, stdev=3865.44 00:28:15.954 lat (usec): min=9810, max=55479, avg=32830.03, stdev=3865.21 00:28:15.954 clat percentiles (usec): 00:28:15.954 | 1.00th=[23200], 5.00th=[26084], 10.00th=[31589], 20.00th=[32113], 00:28:15.954 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:28:15.954 | 70.00th=[32900], 80.00th=[33162], 90.00th=[35390], 95.00th=[40109], 00:28:15.954 | 99.00th=[49021], 
99.50th=[49021], 99.90th=[55313], 99.95th=[55313], 00:28:15.954 | 99.99th=[55313] 00:28:15.954 bw ( KiB/s): min= 1664, max= 2048, per=4.09%, avg=1927.37, stdev=88.08, samples=19 00:28:15.954 iops : min= 416, max= 512, avg=481.84, stdev=22.02, samples=19 00:28:15.954 lat (msec) : 10=0.08%, 20=0.45%, 50=99.05%, 100=0.41% 00:28:15.954 cpu : usr=99.15%, sys=0.56%, ctx=10, majf=0, minf=51 00:28:15.954 IO depths : 1=3.5%, 2=6.9%, 4=14.7%, 8=64.0%, 16=10.8%, 32=0.0%, >=64=0.0% 00:28:15.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 complete : 0=0.0%, 4=91.7%, 8=4.4%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 issued rwts: total=4852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.954 filename1: (groupid=0, jobs=1): err= 0: pid=1253743: Fri Apr 26 15:04:56 2024 00:28:15.954 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:28:15.954 slat (nsec): min=5525, max=96257, avg=14976.33, stdev=13411.76 00:28:15.954 clat (usec): min=22226, max=41173, avg=32604.64, stdev=803.71 00:28:15.954 lat (usec): min=22235, max=41201, avg=32619.61, stdev=801.92 00:28:15.954 clat percentiles (usec): 00:28:15.954 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:15.954 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:15.954 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.954 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34866], 00:28:15.954 | 99.99th=[41157] 00:28:15.954 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1953.68, stdev=54.36, samples=19 00:28:15.954 iops : min= 480, max= 512, avg=488.42, stdev=13.59, samples=19 00:28:15.954 lat (msec) : 50=100.00% 00:28:15.954 cpu : usr=99.17%, sys=0.55%, ctx=11, majf=0, minf=61 00:28:15.954 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:15.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.954 filename1: (groupid=0, jobs=1): err= 0: pid=1253744: Fri Apr 26 15:04:56 2024 00:28:15.954 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10004msec) 00:28:15.954 slat (nsec): min=5502, max=44471, avg=12398.90, stdev=7834.56 00:28:15.954 clat (usec): min=13730, max=51039, avg=32586.34, stdev=2193.74 00:28:15.954 lat (usec): min=13736, max=51054, avg=32598.73, stdev=2193.80 00:28:15.954 clat percentiles (usec): 00:28:15.954 | 1.00th=[21365], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:15.954 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:15.954 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.954 | 99.00th=[41157], 99.50th=[44303], 99.90th=[51119], 99.95th=[51119], 00:28:15.954 | 99.99th=[51119] 00:28:15.954 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1952.84, stdev=69.43, samples=19 00:28:15.954 iops : min= 448, max= 512, avg=488.21, stdev=17.36, samples=19 00:28:15.954 lat (msec) : 20=0.65%, 50=99.02%, 100=0.33% 00:28:15.954 cpu : usr=99.18%, sys=0.53%, ctx=11, majf=0, minf=71 00:28:15.954 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:28:15.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 complete : 0=0.0%, 4=94.2%, 
8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.954 filename1: (groupid=0, jobs=1): err= 0: pid=1253746: Fri Apr 26 15:04:56 2024 00:28:15.954 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:28:15.954 slat (nsec): min=5755, max=97880, avg=26823.86, stdev=14922.88 00:28:15.954 clat (usec): min=22275, max=34589, avg=32481.71, stdev=761.09 00:28:15.954 lat (usec): min=22306, max=34602, avg=32508.53, stdev=760.01 00:28:15.954 clat percentiles (usec): 00:28:15.954 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:15.954 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:15.954 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33162], 00:28:15.954 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:28:15.954 | 99.99th=[34341] 00:28:15.954 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1953.68, stdev=57.91, samples=19 00:28:15.954 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:28:15.954 lat (msec) : 50=100.00% 00:28:15.954 cpu : usr=99.31%, sys=0.41%, ctx=11, majf=0, minf=69 00:28:15.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:15.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.954 filename1: (groupid=0, jobs=1): err= 0: pid=1253747: Fri Apr 26 15:04:56 2024 00:28:15.954 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10008msec) 00:28:15.954 slat (nsec): min=5498, max=60932, avg=14355.20, stdev=9259.67 00:28:15.954 clat (usec): min=5370, max=34802, avg=32353.24, stdev=2247.33 00:28:15.954 lat (usec): min=5385, max=34808, avg=32367.60, stdev=2247.49 00:28:15.954 clat percentiles (usec): 00:28:15.954 | 1.00th=[17695], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:28:15.954 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:28:15.954 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.954 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:28:15.954 | 99.99th=[34866] 00:28:15.954 bw ( KiB/s): min= 1920, max= 2180, per=4.17%, avg=1967.37, stdev=77.06, samples=19 00:28:15.954 iops : min= 480, max= 545, avg=491.84, stdev=19.27, samples=19 00:28:15.954 lat (msec) : 10=0.32%, 20=0.97%, 50=98.70% 00:28:15.954 cpu : usr=99.15%, sys=0.57%, ctx=13, majf=0, minf=41 00:28:15.954 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:15.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.954 filename1: (groupid=0, jobs=1): err= 0: pid=1253748: Fri Apr 26 15:04:56 2024 00:28:15.954 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10005msec) 00:28:15.954 slat (nsec): min=5416, max=85353, avg=22561.41, stdev=13665.44 00:28:15.954 clat (usec): min=8101, max=55188, avg=32526.21, stdev=2263.19 00:28:15.954 lat (usec): min=8121, max=55204, avg=32548.77, stdev=2263.44 00:28:15.954 clat 
percentiles (usec): 00:28:15.954 | 1.00th=[26608], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:15.954 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:15.954 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.954 | 99.00th=[37487], 99.50th=[39584], 99.90th=[55313], 99.95th=[55313], 00:28:15.954 | 99.99th=[55313] 00:28:15.954 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1946.89, stdev=65.69, samples=19 00:28:15.954 iops : min= 448, max= 512, avg=486.68, stdev=16.52, samples=19 00:28:15.954 lat (msec) : 10=0.29%, 20=0.31%, 50=99.08%, 100=0.33% 00:28:15.954 cpu : usr=99.06%, sys=0.65%, ctx=20, majf=0, minf=67 00:28:15.954 IO depths : 1=3.6%, 2=8.1%, 4=19.7%, 8=58.5%, 16=10.0%, 32=0.0%, >=64=0.0% 00:28:15.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 complete : 0=0.0%, 4=93.2%, 8=2.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 issued rwts: total=4893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.954 filename1: (groupid=0, jobs=1): err= 0: pid=1253749: Fri Apr 26 15:04:56 2024 00:28:15.954 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10015msec) 00:28:15.954 slat (usec): min=5, max=102, avg=22.27, stdev=12.28 00:28:15.954 clat (usec): min=16471, max=51444, avg=32541.64, stdev=1255.64 00:28:15.954 lat (usec): min=16477, max=51470, avg=32563.91, stdev=1255.59 00:28:15.954 clat percentiles (usec): 00:28:15.954 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:28:15.954 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:15.954 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33162], 00:28:15.954 | 99.00th=[34341], 99.50th=[34341], 99.90th=[43254], 99.95th=[43254], 00:28:15.954 | 99.99th=[51643] 00:28:15.954 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1953.68, stdev=71.93, samples=19 00:28:15.954 iops : min= 448, max= 512, avg=488.42, stdev=17.98, samples=19 00:28:15.954 lat (msec) : 20=0.33%, 50=99.63%, 100=0.04% 00:28:15.954 cpu : usr=99.16%, sys=0.55%, ctx=15, majf=0, minf=60 00:28:15.954 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:15.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.954 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.955 filename2: (groupid=0, jobs=1): err= 0: pid=1253750: Fri Apr 26 15:04:56 2024 00:28:15.955 read: IOPS=503, BW=2014KiB/s (2062kB/s)(19.7MiB/10011msec) 00:28:15.955 slat (nsec): min=5502, max=48047, avg=9002.03, stdev=4340.75 00:28:15.955 clat (usec): min=4651, max=40180, avg=31703.48, stdev=4074.07 00:28:15.955 lat (usec): min=4664, max=40189, avg=31712.48, stdev=4073.61 00:28:15.955 clat percentiles (usec): 00:28:15.955 | 1.00th=[ 7242], 5.00th=[25560], 10.00th=[32113], 20.00th=[32375], 00:28:15.955 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:15.955 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.955 | 99.00th=[33817], 99.50th=[35914], 99.90th=[36963], 99.95th=[39584], 00:28:15.955 | 99.99th=[40109] 00:28:15.955 bw ( KiB/s): min= 1916, max= 2666, per=4.27%, avg=2013.79, stdev=176.65, samples=19 00:28:15.955 iops : min= 479, max= 666, avg=503.42, stdev=44.06, samples=19 00:28:15.955 lat (msec) : 
10=1.27%, 20=2.00%, 50=96.73% 00:28:15.955 cpu : usr=98.78%, sys=0.75%, ctx=134, majf=0, minf=69 00:28:15.955 IO depths : 1=5.8%, 2=11.6%, 4=23.5%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:15.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.955 filename2: (groupid=0, jobs=1): err= 0: pid=1253751: Fri Apr 26 15:04:56 2024 00:28:15.955 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10007msec) 00:28:15.955 slat (nsec): min=5407, max=43394, avg=11773.47, stdev=6957.05 00:28:15.955 clat (usec): min=11888, max=57717, avg=32599.28, stdev=2185.27 00:28:15.955 lat (usec): min=11915, max=57731, avg=32611.06, stdev=2185.29 00:28:15.955 clat percentiles (usec): 00:28:15.955 | 1.00th=[24249], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:15.955 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:15.955 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.955 | 99.00th=[41157], 99.50th=[44303], 99.90th=[51119], 99.95th=[51119], 00:28:15.955 | 99.99th=[57934] 00:28:15.955 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1953.68, stdev=71.93, samples=19 00:28:15.955 iops : min= 448, max= 512, avg=488.42, stdev=17.98, samples=19 00:28:15.955 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:28:15.955 cpu : usr=99.17%, sys=0.54%, ctx=30, majf=0, minf=46 00:28:15.955 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:28:15.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.955 filename2: (groupid=0, jobs=1): err= 0: pid=1253752: Fri Apr 26 15:04:56 2024 00:28:15.955 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10014msec) 00:28:15.955 slat (nsec): min=5685, max=97344, avg=22380.42, stdev=15598.86 00:28:15.955 clat (usec): min=17744, max=34217, avg=32438.38, stdev=1285.79 00:28:15.955 lat (usec): min=17752, max=34226, avg=32460.76, stdev=1285.48 00:28:15.955 clat percentiles (usec): 00:28:15.955 | 1.00th=[26346], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:28:15.955 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:15.955 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.955 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:28:15.955 | 99.99th=[34341] 00:28:15.955 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1960.42, stdev=61.13, samples=19 00:28:15.955 iops : min= 480, max= 512, avg=490.11, stdev=15.28, samples=19 00:28:15.955 lat (msec) : 20=0.33%, 50=99.67% 00:28:15.955 cpu : usr=98.97%, sys=0.68%, ctx=59, majf=0, minf=43 00:28:15.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:15.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.955 filename2: (groupid=0, jobs=1): err= 0: pid=1253753: Fri Apr 26 15:04:56 
2024 00:28:15.955 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:28:15.955 slat (usec): min=5, max=109, avg=30.52, stdev=17.45 00:28:15.955 clat (usec): min=19954, max=39735, avg=32462.94, stdev=914.95 00:28:15.955 lat (usec): min=19983, max=39775, avg=32493.45, stdev=913.83 00:28:15.955 clat percentiles (usec): 00:28:15.955 | 1.00th=[28181], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:28:15.955 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:15.955 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:28:15.955 | 99.00th=[34341], 99.50th=[34341], 99.90th=[38536], 99.95th=[39584], 00:28:15.955 | 99.99th=[39584] 00:28:15.955 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1953.68, stdev=54.36, samples=19 00:28:15.955 iops : min= 480, max= 512, avg=488.42, stdev=13.59, samples=19 00:28:15.955 lat (msec) : 20=0.04%, 50=99.96% 00:28:15.955 cpu : usr=98.57%, sys=0.85%, ctx=57, majf=0, minf=56 00:28:15.955 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:28:15.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.955 filename2: (groupid=0, jobs=1): err= 0: pid=1253755: Fri Apr 26 15:04:56 2024 00:28:15.955 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:28:15.955 slat (usec): min=5, max=103, avg=24.60, stdev=15.26 00:28:15.955 clat (usec): min=11893, max=55697, avg=32447.49, stdev=2050.14 00:28:15.955 lat (usec): min=11899, max=55714, avg=32472.09, stdev=2050.53 00:28:15.955 clat percentiles (usec): 00:28:15.955 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:15.955 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:15.955 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:28:15.955 | 99.00th=[34341], 99.50th=[34341], 99.90th=[55837], 99.95th=[55837], 00:28:15.955 | 99.99th=[55837] 00:28:15.955 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1946.74, stdev=68.61, samples=19 00:28:15.955 iops : min= 448, max= 512, avg=486.68, stdev=17.15, samples=19 00:28:15.955 lat (msec) : 20=0.65%, 50=99.02%, 100=0.33% 00:28:15.955 cpu : usr=99.16%, sys=0.54%, ctx=70, majf=0, minf=44 00:28:15.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:15.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.955 filename2: (groupid=0, jobs=1): err= 0: pid=1253756: Fri Apr 26 15:04:56 2024 00:28:15.955 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10006msec) 00:28:15.955 slat (nsec): min=5668, max=91289, avg=24024.95, stdev=13721.75 00:28:15.955 clat (usec): min=11869, max=56557, avg=32469.17, stdev=2081.81 00:28:15.955 lat (usec): min=11876, max=56572, avg=32493.19, stdev=2082.08 00:28:15.955 clat percentiles (usec): 00:28:15.955 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:28:15.955 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:15.955 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:28:15.955 | 
99.00th=[33817], 99.50th=[34341], 99.90th=[56361], 99.95th=[56361], 00:28:15.955 | 99.99th=[56361] 00:28:15.955 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1947.11, stdev=68.14, samples=19 00:28:15.955 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:28:15.955 lat (msec) : 20=0.65%, 50=99.02%, 100=0.33% 00:28:15.955 cpu : usr=99.02%, sys=0.62%, ctx=106, majf=0, minf=59 00:28:15.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:15.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.955 filename2: (groupid=0, jobs=1): err= 0: pid=1253757: Fri Apr 26 15:04:56 2024 00:28:15.955 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:28:15.955 slat (nsec): min=5520, max=95944, avg=20434.34, stdev=15144.76 00:28:15.955 clat (usec): min=20009, max=34600, avg=32551.19, stdev=775.14 00:28:15.955 lat (usec): min=20030, max=34626, avg=32571.63, stdev=773.21 00:28:15.955 clat percentiles (usec): 00:28:15.955 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:28:15.955 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:28:15.955 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33162], 00:28:15.955 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:28:15.955 | 99.99th=[34341] 00:28:15.955 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1953.68, stdev=57.91, samples=19 00:28:15.955 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:28:15.955 lat (msec) : 50=100.00% 00:28:15.955 cpu : usr=98.56%, sys=0.87%, ctx=71, majf=0, minf=46 00:28:15.955 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:15.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.955 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.955 filename2: (groupid=0, jobs=1): err= 0: pid=1253758: Fri Apr 26 15:04:56 2024 00:28:15.955 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10010msec) 00:28:15.955 slat (nsec): min=5617, max=56617, avg=16326.92, stdev=9372.60 00:28:15.955 clat (usec): min=9537, max=53053, avg=32451.86, stdev=1824.81 00:28:15.955 lat (usec): min=9543, max=53073, avg=32468.18, stdev=1825.51 00:28:15.955 clat percentiles (usec): 00:28:15.955 | 1.00th=[26608], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:28:15.955 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:28:15.955 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:28:15.955 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:28:15.956 | 99.99th=[53216] 00:28:15.956 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1953.68, stdev=57.91, samples=19 00:28:15.956 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:28:15.956 lat (msec) : 10=0.33%, 20=0.37%, 50=99.27%, 100=0.04% 00:28:15.956 cpu : usr=99.05%, sys=0.67%, ctx=21, majf=0, minf=50 00:28:15.956 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:15.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.956 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.956 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:15.956 00:28:15.956 Run status group 0 (all jobs): 00:28:15.956 READ: bw=46.0MiB/s (48.3MB/s), 1940KiB/s-2033KiB/s (1986kB/s-2082kB/s), io=461MiB (484MB), run=10004-10020msec 00:28:15.956 15:04:57 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:15.956 15:04:57 -- target/dif.sh@43 -- # local sub 00:28:15.956 15:04:57 -- target/dif.sh@45 -- # for sub in "$@" 00:28:15.956 15:04:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:15.956 15:04:57 -- target/dif.sh@36 -- # local sub_id=0 00:28:15.956 15:04:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@45 -- # for sub in "$@" 00:28:15.956 15:04:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:15.956 15:04:57 -- target/dif.sh@36 -- # local sub_id=1 00:28:15.956 15:04:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@45 -- # for sub in "$@" 00:28:15.956 15:04:57 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:15.956 15:04:57 -- target/dif.sh@36 -- # local sub_id=2 00:28:15.956 15:04:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:15.956 15:04:57 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:15.956 15:04:57 -- target/dif.sh@115 -- # numjobs=2 00:28:15.956 15:04:57 -- target/dif.sh@115 -- # iodepth=8 00:28:15.956 15:04:57 -- target/dif.sh@115 -- # runtime=5 00:28:15.956 15:04:57 -- target/dif.sh@115 -- # files=1 00:28:15.956 15:04:57 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:15.956 15:04:57 -- target/dif.sh@28 -- # local sub 00:28:15.956 15:04:57 -- target/dif.sh@30 -- # for sub in "$@" 00:28:15.956 15:04:57 -- target/dif.sh@31 -- # create_subsystem 0 
00:28:15.956 15:04:57 -- target/dif.sh@18 -- # local sub_id=0 00:28:15.956 15:04:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 bdev_null0 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 [2024-04-26 15:04:57.150627] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@30 -- # for sub in "$@" 00:28:15.956 15:04:57 -- target/dif.sh@31 -- # create_subsystem 1 00:28:15.956 15:04:57 -- target/dif.sh@18 -- # local sub_id=1 00:28:15.956 15:04:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 bdev_null1 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.956 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.956 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.956 15:04:57 -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:15.956 15:04:57 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:15.956 15:04:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:15.956 15:04:57 -- nvmf/common.sh@521 -- # config=() 00:28:15.956 15:04:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:28:15.956 15:04:57 -- nvmf/common.sh@521 -- # local subsystem config 00:28:15.956 15:04:57 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:15.956 15:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:15.956 15:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:15.956 { 00:28:15.956 "params": { 00:28:15.956 "name": "Nvme$subsystem", 00:28:15.956 "trtype": "$TEST_TRANSPORT", 00:28:15.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.956 "adrfam": "ipv4", 00:28:15.956 "trsvcid": "$NVMF_PORT", 00:28:15.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.956 "hdgst": ${hdgst:-false}, 00:28:15.956 "ddgst": ${ddgst:-false} 00:28:15.956 }, 00:28:15.956 "method": "bdev_nvme_attach_controller" 00:28:15.956 } 00:28:15.956 EOF 00:28:15.956 )") 00:28:15.956 15:04:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:15.956 15:04:57 -- target/dif.sh@82 -- # gen_fio_conf 00:28:15.956 15:04:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:15.956 15:04:57 -- target/dif.sh@54 -- # local file 00:28:15.956 15:04:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:15.956 15:04:57 -- target/dif.sh@56 -- # cat 00:28:15.956 15:04:57 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:15.956 15:04:57 -- common/autotest_common.sh@1327 -- # shift 00:28:15.956 15:04:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:15.956 15:04:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.956 15:04:57 -- nvmf/common.sh@543 -- # cat 00:28:15.956 15:04:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:15.956 15:04:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:15.956 15:04:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:15.956 15:04:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:15.956 15:04:57 -- target/dif.sh@72 -- # (( file <= files )) 00:28:15.956 15:04:57 -- target/dif.sh@73 -- # cat 00:28:15.956 15:04:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:15.956 15:04:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:15.956 { 00:28:15.956 "params": { 00:28:15.956 "name": "Nvme$subsystem", 00:28:15.956 "trtype": "$TEST_TRANSPORT", 00:28:15.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.956 "adrfam": "ipv4", 00:28:15.956 "trsvcid": "$NVMF_PORT", 00:28:15.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.956 "hdgst": ${hdgst:-false}, 00:28:15.956 "ddgst": ${ddgst:-false} 00:28:15.956 }, 00:28:15.956 "method": "bdev_nvme_attach_controller" 00:28:15.956 } 00:28:15.956 EOF 00:28:15.956 )") 00:28:15.956 15:04:57 -- target/dif.sh@72 -- # (( file++ )) 00:28:15.956 15:04:57 -- target/dif.sh@72 -- # (( file <= files )) 00:28:15.956 15:04:57 -- nvmf/common.sh@543 -- # cat 00:28:15.956 15:04:57 -- nvmf/common.sh@545 -- # jq . 
00:28:15.956 15:04:57 -- nvmf/common.sh@546 -- # IFS=, 00:28:15.956 15:04:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:15.956 "params": { 00:28:15.956 "name": "Nvme0", 00:28:15.956 "trtype": "tcp", 00:28:15.956 "traddr": "10.0.0.2", 00:28:15.956 "adrfam": "ipv4", 00:28:15.956 "trsvcid": "4420", 00:28:15.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:15.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:15.957 "hdgst": false, 00:28:15.957 "ddgst": false 00:28:15.957 }, 00:28:15.957 "method": "bdev_nvme_attach_controller" 00:28:15.957 },{ 00:28:15.957 "params": { 00:28:15.957 "name": "Nvme1", 00:28:15.957 "trtype": "tcp", 00:28:15.957 "traddr": "10.0.0.2", 00:28:15.957 "adrfam": "ipv4", 00:28:15.957 "trsvcid": "4420", 00:28:15.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:15.957 "hdgst": false, 00:28:15.957 "ddgst": false 00:28:15.957 }, 00:28:15.957 "method": "bdev_nvme_attach_controller" 00:28:15.957 }' 00:28:15.957 15:04:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:15.957 15:04:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:15.957 15:04:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.957 15:04:57 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:15.957 15:04:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:15.957 15:04:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:15.957 15:04:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:15.957 15:04:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:15.957 15:04:57 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:15.957 15:04:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:15.957 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:15.957 ... 00:28:15.957 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:15.957 ... 
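For context, the rpc_cmd calls traced above stand up the target side this run reads from: null bdevs created with 512-byte blocks, 16 bytes of metadata and DIF type 1, each exported through its own NVMe-oF subsystem with a TCP listener on 10.0.0.2:4420. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so a rough equivalent as direct calls looks like the sketch below (rpc.py path assumed; the tcp transport itself is created earlier in the test and is not repeated here):

# Mirrors the traced sequence for cnode0/bdev_null0; the same four calls
# are repeated for cnode1/bdev_null1
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
  --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
  -t tcp -a 10.0.0.2 -s 4420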
00:28:15.957 fio-3.35 00:28:15.957 Starting 4 threads 00:28:15.957 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.247 00:28:21.247 filename0: (groupid=0, jobs=1): err= 0: pid=1255975: Fri Apr 26 15:05:03 2024 00:28:21.247 read: IOPS=2101, BW=16.4MiB/s (17.2MB/s)(82.1MiB/5002msec) 00:28:21.247 slat (usec): min=5, max=473, avg= 8.33, stdev= 5.31 00:28:21.247 clat (usec): min=2118, max=6797, avg=3782.88, stdev=654.87 00:28:21.247 lat (usec): min=2127, max=6831, avg=3791.21, stdev=654.72 00:28:21.247 clat percentiles (usec): 00:28:21.247 | 1.00th=[ 2769], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3392], 00:28:21.247 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3654], 00:28:21.247 | 70.00th=[ 3752], 80.00th=[ 3982], 90.00th=[ 5211], 95.00th=[ 5276], 00:28:21.247 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 6390], 99.95th=[ 6521], 00:28:21.247 | 99.99th=[ 6587] 00:28:21.247 bw ( KiB/s): min=16624, max=17232, per=25.19%, avg=16835.56, stdev=211.93, samples=9 00:28:21.247 iops : min= 2078, max= 2154, avg=2104.44, stdev=26.49, samples=9 00:28:21.247 lat (msec) : 4=80.62%, 10=19.38% 00:28:21.247 cpu : usr=97.46%, sys=2.26%, ctx=13, majf=0, minf=79 00:28:21.247 IO depths : 1=0.1%, 2=0.2%, 4=72.6%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.247 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.247 issued rwts: total=10513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.247 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:21.247 filename0: (groupid=0, jobs=1): err= 0: pid=1255976: Fri Apr 26 15:05:03 2024 00:28:21.247 read: IOPS=2064, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5001msec) 00:28:21.247 slat (nsec): min=5334, max=68743, avg=6337.37, stdev=2210.46 00:28:21.247 clat (usec): min=1387, max=7150, avg=3857.81, stdev=708.19 00:28:21.247 lat (usec): min=1392, max=7158, avg=3864.15, stdev=708.17 00:28:21.247 clat percentiles (usec): 00:28:21.248 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3458], 00:28:21.248 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3687], 00:28:21.248 | 70.00th=[ 3785], 80.00th=[ 4047], 90.00th=[ 5211], 95.00th=[ 5407], 00:28:21.248 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6325], 99.95th=[ 6390], 00:28:21.248 | 99.99th=[ 7177] 00:28:21.248 bw ( KiB/s): min=16144, max=16752, per=24.68%, avg=16497.78, stdev=174.07, samples=9 00:28:21.248 iops : min= 2018, max= 2094, avg=2062.22, stdev=21.76, samples=9 00:28:21.248 lat (msec) : 2=0.09%, 4=78.23%, 10=21.68% 00:28:21.248 cpu : usr=97.76%, sys=2.02%, ctx=6, majf=0, minf=116 00:28:21.248 IO depths : 1=0.2%, 2=0.4%, 4=72.4%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.248 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.248 issued rwts: total=10323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:21.248 filename1: (groupid=0, jobs=1): err= 0: pid=1255977: Fri Apr 26 15:05:03 2024 00:28:21.248 read: IOPS=2076, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5002msec) 00:28:21.248 slat (nsec): min=5345, max=69026, avg=8385.52, stdev=2623.15 00:28:21.248 clat (usec): min=1670, max=7408, avg=3829.96, stdev=701.82 00:28:21.248 lat (usec): min=1676, max=7441, avg=3838.34, stdev=701.65 00:28:21.248 clat percentiles (usec): 00:28:21.248 | 1.00th=[ 2769], 5.00th=[ 3163], 10.00th=[ 3294], 
20.00th=[ 3425], 00:28:21.248 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3687], 00:28:21.248 | 70.00th=[ 3752], 80.00th=[ 4113], 90.00th=[ 5211], 95.00th=[ 5276], 00:28:21.248 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 6259], 99.95th=[ 6915], 00:28:21.248 | 99.99th=[ 6980] 00:28:21.248 bw ( KiB/s): min=16496, max=16688, per=24.81%, avg=16581.33, stdev=75.05, samples=9 00:28:21.248 iops : min= 2062, max= 2086, avg=2072.67, stdev= 9.38, samples=9 00:28:21.248 lat (msec) : 2=0.08%, 4=77.79%, 10=22.14% 00:28:21.248 cpu : usr=97.76%, sys=1.94%, ctx=6, majf=0, minf=64 00:28:21.248 IO depths : 1=0.1%, 2=0.2%, 4=72.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.248 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.248 issued rwts: total=10385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:21.248 filename1: (groupid=0, jobs=1): err= 0: pid=1255978: Fri Apr 26 15:05:03 2024 00:28:21.248 read: IOPS=2113, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5003msec) 00:28:21.248 slat (nsec): min=2704, max=42660, avg=6250.81, stdev=1804.76 00:28:21.248 clat (usec): min=1250, max=6236, avg=3767.61, stdev=673.49 00:28:21.248 lat (usec): min=1256, max=6241, avg=3773.86, stdev=673.47 00:28:21.248 clat percentiles (usec): 00:28:21.248 | 1.00th=[ 2606], 5.00th=[ 3064], 10.00th=[ 3261], 20.00th=[ 3359], 00:28:21.248 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3654], 00:28:21.248 | 70.00th=[ 3752], 80.00th=[ 3949], 90.00th=[ 5211], 95.00th=[ 5276], 00:28:21.248 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 6063], 99.95th=[ 6128], 00:28:21.248 | 99.99th=[ 6259] 00:28:21.248 bw ( KiB/s): min=16656, max=17216, per=25.30%, avg=16910.40, stdev=213.79, samples=10 00:28:21.248 iops : min= 2082, max= 2152, avg=2113.80, stdev=26.72, samples=10 00:28:21.248 lat (msec) : 2=0.23%, 4=80.72%, 10=19.06% 00:28:21.248 cpu : usr=97.94%, sys=1.82%, ctx=7, majf=0, minf=93 00:28:21.248 IO depths : 1=0.2%, 2=0.4%, 4=72.1%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:21.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.248 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.248 issued rwts: total=10574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:21.248 00:28:21.248 Run status group 0 (all jobs): 00:28:21.248 READ: bw=65.3MiB/s (68.4MB/s), 16.1MiB/s-16.5MiB/s (16.9MB/s-17.3MB/s), io=327MiB (342MB), run=5001-5003msec 00:28:21.248 15:05:03 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:21.248 15:05:03 -- target/dif.sh@43 -- # local sub 00:28:21.248 15:05:03 -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.248 15:05:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:21.248 15:05:03 -- target/dif.sh@36 -- # local sub_id=0 00:28:21.248 15:05:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:21.248 15:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 15:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.248 15:05:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:21.248 15:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 15:05:03 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.248 15:05:03 -- target/dif.sh@45 -- # for sub in "$@" 00:28:21.248 15:05:03 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:21.248 15:05:03 -- target/dif.sh@36 -- # local sub_id=1 00:28:21.248 15:05:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.248 15:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 15:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.248 15:05:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:21.248 15:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 15:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.248 00:28:21.248 real 0m24.222s 00:28:21.248 user 5m14.051s 00:28:21.248 sys 0m3.664s 00:28:21.248 15:05:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 ************************************ 00:28:21.248 END TEST fio_dif_rand_params 00:28:21.248 ************************************ 00:28:21.248 15:05:03 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:21.248 15:05:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:21.248 15:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 ************************************ 00:28:21.248 START TEST fio_dif_digest 00:28:21.248 ************************************ 00:28:21.248 15:05:03 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:28:21.248 15:05:03 -- target/dif.sh@123 -- # local NULL_DIF 00:28:21.248 15:05:03 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:21.248 15:05:03 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:21.248 15:05:03 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:21.248 15:05:03 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:21.248 15:05:03 -- target/dif.sh@127 -- # numjobs=3 00:28:21.248 15:05:03 -- target/dif.sh@127 -- # iodepth=3 00:28:21.248 15:05:03 -- target/dif.sh@127 -- # runtime=10 00:28:21.248 15:05:03 -- target/dif.sh@128 -- # hdgst=true 00:28:21.248 15:05:03 -- target/dif.sh@128 -- # ddgst=true 00:28:21.248 15:05:03 -- target/dif.sh@130 -- # create_subsystems 0 00:28:21.248 15:05:03 -- target/dif.sh@28 -- # local sub 00:28:21.248 15:05:03 -- target/dif.sh@30 -- # for sub in "$@" 00:28:21.248 15:05:03 -- target/dif.sh@31 -- # create_subsystem 0 00:28:21.248 15:05:03 -- target/dif.sh@18 -- # local sub_id=0 00:28:21.248 15:05:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:21.248 15:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 bdev_null0 00:28:21.248 15:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.248 15:05:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:21.248 15:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 15:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.248 15:05:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:21.248 
15:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 15:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.248 15:05:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:21.248 15:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.248 15:05:03 -- common/autotest_common.sh@10 -- # set +x 00:28:21.248 [2024-04-26 15:05:03.654772] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.248 15:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.248 15:05:03 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:21.248 15:05:03 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:21.248 15:05:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:21.248 15:05:03 -- nvmf/common.sh@521 -- # config=() 00:28:21.248 15:05:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.248 15:05:03 -- nvmf/common.sh@521 -- # local subsystem config 00:28:21.248 15:05:03 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.248 15:05:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:21.248 15:05:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:21.248 { 00:28:21.248 "params": { 00:28:21.248 "name": "Nvme$subsystem", 00:28:21.248 "trtype": "$TEST_TRANSPORT", 00:28:21.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.248 "adrfam": "ipv4", 00:28:21.248 "trsvcid": "$NVMF_PORT", 00:28:21.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.248 "hdgst": ${hdgst:-false}, 00:28:21.248 "ddgst": ${ddgst:-false} 00:28:21.248 }, 00:28:21.248 "method": "bdev_nvme_attach_controller" 00:28:21.248 } 00:28:21.248 EOF 00:28:21.248 )") 00:28:21.248 15:05:03 -- target/dif.sh@82 -- # gen_fio_conf 00:28:21.248 15:05:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:21.248 15:05:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:21.248 15:05:03 -- target/dif.sh@54 -- # local file 00:28:21.248 15:05:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:21.248 15:05:03 -- target/dif.sh@56 -- # cat 00:28:21.248 15:05:03 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.249 15:05:03 -- common/autotest_common.sh@1327 -- # shift 00:28:21.249 15:05:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:21.249 15:05:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.249 15:05:03 -- nvmf/common.sh@543 -- # cat 00:28:21.249 15:05:03 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.249 15:05:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:21.249 15:05:03 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:21.249 15:05:03 -- target/dif.sh@72 -- # (( file <= files )) 00:28:21.249 15:05:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:21.249 15:05:03 -- nvmf/common.sh@545 -- # jq . 
00:28:21.249 15:05:03 -- nvmf/common.sh@546 -- # IFS=, 00:28:21.249 15:05:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:21.249 "params": { 00:28:21.249 "name": "Nvme0", 00:28:21.249 "trtype": "tcp", 00:28:21.249 "traddr": "10.0.0.2", 00:28:21.249 "adrfam": "ipv4", 00:28:21.249 "trsvcid": "4420", 00:28:21.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:21.249 "hdgst": true, 00:28:21.249 "ddgst": true 00:28:21.249 }, 00:28:21.249 "method": "bdev_nvme_attach_controller" 00:28:21.249 }' 00:28:21.249 15:05:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:21.249 15:05:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:21.249 15:05:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:21.249 15:05:03 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:21.249 15:05:03 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:28:21.249 15:05:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:21.249 15:05:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:28:21.249 15:05:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:28:21.249 15:05:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:21.249 15:05:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.510 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:21.510 ... 00:28:21.510 fio-3.35 00:28:21.510 Starting 3 threads 00:28:21.510 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.832 00:28:33.832 filename0: (groupid=0, jobs=1): err= 0: pid=1257495: Fri Apr 26 15:05:14 2024 00:28:33.832 read: IOPS=210, BW=26.4MiB/s (27.6MB/s)(265MiB/10047msec) 00:28:33.832 slat (nsec): min=5569, max=99044, avg=6453.48, stdev=2254.43 00:28:33.832 clat (usec): min=8258, max=47306, avg=14183.57, stdev=1483.54 00:28:33.832 lat (usec): min=8265, max=47312, avg=14190.02, stdev=1483.57 00:28:33.832 clat percentiles (usec): 00:28:33.832 | 1.00th=[ 9634], 5.00th=[12125], 10.00th=[12780], 20.00th=[13304], 00:28:33.832 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:28:33.832 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16188], 00:28:33.832 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:28:33.832 | 99.99th=[47449] 00:28:33.832 bw ( KiB/s): min=26112, max=28160, per=32.72%, avg=27097.60, stdev=594.75, samples=20 00:28:33.832 iops : min= 204, max= 220, avg=211.70, stdev= 4.65, samples=20 00:28:33.832 lat (msec) : 10=1.32%, 20=98.63%, 50=0.05% 00:28:33.832 cpu : usr=96.38%, sys=3.39%, ctx=20, majf=0, minf=156 00:28:33.832 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.832 issued rwts: total=2118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.832 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:33.832 filename0: (groupid=0, jobs=1): err= 0: pid=1257496: Fri Apr 26 15:05:14 2024 00:28:33.832 read: IOPS=219, BW=27.4MiB/s (28.7MB/s)(276MiB/10049msec) 00:28:33.832 slat (nsec): min=3025, max=21734, avg=6390.16, stdev=718.92 00:28:33.832 clat (usec): min=8640, 
max=56613, avg=13650.01, stdev=3160.10 00:28:33.832 lat (usec): min=8646, max=56620, avg=13656.40, stdev=3160.11 00:28:33.832 clat percentiles (usec): 00:28:33.832 | 1.00th=[10290], 5.00th=[11469], 10.00th=[11994], 20.00th=[12518], 00:28:33.832 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:28:33.832 | 70.00th=[14091], 80.00th=[14353], 90.00th=[15008], 95.00th=[15401], 00:28:33.832 | 99.00th=[16450], 99.50th=[17433], 99.90th=[55837], 99.95th=[56361], 00:28:33.832 | 99.99th=[56361] 00:28:33.832 bw ( KiB/s): min=25600, max=30720, per=34.04%, avg=28185.60, stdev=1344.14, samples=20 00:28:33.832 iops : min= 200, max= 240, avg=220.20, stdev=10.50, samples=20 00:28:33.832 lat (msec) : 10=0.73%, 20=98.77%, 100=0.50% 00:28:33.832 cpu : usr=96.23%, sys=3.55%, ctx=26, majf=0, minf=115 00:28:33.832 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.832 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.832 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:33.832 filename0: (groupid=0, jobs=1): err= 0: pid=1257497: Fri Apr 26 15:05:14 2024 00:28:33.832 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(272MiB/10045msec) 00:28:33.832 slat (nsec): min=5566, max=33324, avg=6422.95, stdev=1222.00 00:28:33.832 clat (usec): min=9092, max=56784, avg=13801.55, stdev=3046.43 00:28:33.832 lat (usec): min=9099, max=56790, avg=13807.97, stdev=3046.52 00:28:33.832 clat percentiles (usec): 00:28:33.832 | 1.00th=[10421], 5.00th=[11863], 10.00th=[12125], 20.00th=[12780], 00:28:33.832 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[13829], 00:28:33.832 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15401], 00:28:33.832 | 99.00th=[16581], 99.50th=[50070], 99.90th=[56361], 99.95th=[56361], 00:28:33.832 | 99.99th=[56886] 00:28:33.832 bw ( KiB/s): min=24832, max=29440, per=33.65%, avg=27865.60, stdev=1077.43, samples=20 00:28:33.832 iops : min= 194, max= 230, avg=217.70, stdev= 8.42, samples=20 00:28:33.832 lat (msec) : 10=0.46%, 20=99.04%, 100=0.50% 00:28:33.832 cpu : usr=96.08%, sys=3.70%, ctx=25, majf=0, minf=144 00:28:33.832 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.832 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.832 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:33.832 00:28:33.832 Run status group 0 (all jobs): 00:28:33.832 READ: bw=80.9MiB/s (84.8MB/s), 26.4MiB/s-27.4MiB/s (27.6MB/s-28.7MB/s), io=813MiB (852MB), run=10045-10049msec 00:28:33.832 15:05:14 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:33.832 15:05:14 -- target/dif.sh@43 -- # local sub 00:28:33.832 15:05:14 -- target/dif.sh@45 -- # for sub in "$@" 00:28:33.832 15:05:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:33.832 15:05:14 -- target/dif.sh@36 -- # local sub_id=0 00:28:33.832 15:05:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:33.832 15:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:33.832 15:05:14 -- common/autotest_common.sh@10 -- # set +x 00:28:33.832 15:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:33.832 15:05:14 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:33.832 15:05:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:33.832 15:05:14 -- common/autotest_common.sh@10 -- # set +x 00:28:33.832 15:05:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:33.832 00:28:33.832 real 0m11.146s 00:28:33.832 user 0m40.446s 00:28:33.832 sys 0m1.409s 00:28:33.832 15:05:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:33.832 15:05:14 -- common/autotest_common.sh@10 -- # set +x 00:28:33.832 ************************************ 00:28:33.832 END TEST fio_dif_digest 00:28:33.832 ************************************ 00:28:33.832 15:05:14 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:33.832 15:05:14 -- target/dif.sh@147 -- # nvmftestfini 00:28:33.832 15:05:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:33.832 15:05:14 -- nvmf/common.sh@117 -- # sync 00:28:33.832 15:05:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:33.832 15:05:14 -- nvmf/common.sh@120 -- # set +e 00:28:33.832 15:05:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:33.832 15:05:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:33.832 rmmod nvme_tcp 00:28:33.832 rmmod nvme_fabrics 00:28:33.832 rmmod nvme_keyring 00:28:33.832 15:05:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:33.832 15:05:14 -- nvmf/common.sh@124 -- # set -e 00:28:33.832 15:05:14 -- nvmf/common.sh@125 -- # return 0 00:28:33.832 15:05:14 -- nvmf/common.sh@478 -- # '[' -n 1246925 ']' 00:28:33.832 15:05:14 -- nvmf/common.sh@479 -- # killprocess 1246925 00:28:33.832 15:05:14 -- common/autotest_common.sh@936 -- # '[' -z 1246925 ']' 00:28:33.832 15:05:14 -- common/autotest_common.sh@940 -- # kill -0 1246925 00:28:33.832 15:05:14 -- common/autotest_common.sh@941 -- # uname 00:28:33.832 15:05:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:33.832 15:05:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1246925 00:28:33.832 15:05:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:33.832 15:05:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:33.832 15:05:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1246925' 00:28:33.832 killing process with pid 1246925 00:28:33.832 15:05:14 -- common/autotest_common.sh@955 -- # kill 1246925 00:28:33.832 15:05:14 -- common/autotest_common.sh@960 -- # wait 1246925 00:28:33.832 15:05:15 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:33.832 15:05:15 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:35.750 Waiting for block devices as requested 00:28:35.750 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:35.750 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:35.750 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:36.011 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:36.011 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:36.011 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:36.273 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:36.273 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:36.273 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:36.534 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:36.534 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:36.534 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:36.795 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:36.795 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:36.795 0000:00:01.3 (8086 0b00): vfio-pci -> 
ioatdma 00:28:36.795 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:37.056 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:37.318 15:05:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:37.318 15:05:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:37.318 15:05:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:37.318 15:05:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:37.318 15:05:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.318 15:05:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:37.318 15:05:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.233 15:05:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:39.233 00:28:39.233 real 1m17.457s 00:28:39.233 user 7m55.910s 00:28:39.233 sys 0m19.212s 00:28:39.233 15:05:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:39.233 15:05:21 -- common/autotest_common.sh@10 -- # set +x 00:28:39.233 ************************************ 00:28:39.233 END TEST nvmf_dif 00:28:39.233 ************************************ 00:28:39.233 15:05:21 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:39.495 15:05:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:39.495 15:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:39.495 15:05:21 -- common/autotest_common.sh@10 -- # set +x 00:28:39.495 ************************************ 00:28:39.495 START TEST nvmf_abort_qd_sizes 00:28:39.495 ************************************ 00:28:39.495 15:05:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:39.495 * Looking for test storage... 
00:28:39.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.755 15:05:22 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.755 15:05:22 -- nvmf/common.sh@7 -- # uname -s 00:28:39.755 15:05:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.755 15:05:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.755 15:05:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.755 15:05:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.755 15:05:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.755 15:05:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.755 15:05:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.755 15:05:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.755 15:05:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.755 15:05:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.755 15:05:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:39.755 15:05:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:39.755 15:05:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.755 15:05:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.755 15:05:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.755 15:05:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.755 15:05:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.755 15:05:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.755 15:05:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.755 15:05:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.755 15:05:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.755 15:05:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.755 15:05:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.755 15:05:22 -- paths/export.sh@5 -- # export PATH 00:28:39.755 15:05:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.755 15:05:22 -- nvmf/common.sh@47 -- # : 0 00:28:39.755 15:05:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:39.755 15:05:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:39.755 15:05:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.755 15:05:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.755 15:05:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.755 15:05:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:39.755 15:05:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:39.755 15:05:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:39.755 15:05:22 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:39.755 15:05:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:39.755 15:05:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.755 15:05:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:39.755 15:05:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:39.755 15:05:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:39.755 15:05:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.756 15:05:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:39.756 15:05:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.756 15:05:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:39.756 15:05:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:39.756 15:05:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:39.756 15:05:22 -- common/autotest_common.sh@10 -- # set +x 00:28:46.340 15:05:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:46.340 15:05:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:46.340 15:05:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:46.340 15:05:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:46.340 15:05:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:46.340 15:05:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:46.340 15:05:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:46.340 15:05:28 -- nvmf/common.sh@295 -- # net_devs=() 00:28:46.340 15:05:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:46.341 15:05:28 -- nvmf/common.sh@296 -- # e810=() 00:28:46.341 15:05:28 -- nvmf/common.sh@296 -- # local -ga e810 00:28:46.341 15:05:28 -- nvmf/common.sh@297 -- # x722=() 00:28:46.341 15:05:28 -- nvmf/common.sh@297 -- # local -ga x722 00:28:46.341 15:05:28 -- nvmf/common.sh@298 -- # mlx=() 00:28:46.341 15:05:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:46.341 15:05:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.341 15:05:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:46.341 15:05:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:46.341 15:05:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:46.341 15:05:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.341 15:05:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:46.341 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:46.341 15:05:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.341 15:05:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:46.341 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:46.341 15:05:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:46.341 15:05:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.341 15:05:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.341 15:05:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:46.341 15:05:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.341 15:05:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:46.341 Found net devices under 0000:31:00.0: cvl_0_0 00:28:46.341 15:05:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.341 15:05:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.341 15:05:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.341 15:05:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:46.341 15:05:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.341 15:05:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:46.341 Found net devices under 0000:31:00.1: cvl_0_1 00:28:46.341 15:05:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.341 15:05:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:46.341 15:05:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:46.341 15:05:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:46.341 15:05:28 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:46.341 15:05:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:46.341 15:05:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.341 15:05:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.341 15:05:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.341 15:05:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:46.341 15:05:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.341 15:05:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.341 15:05:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:46.341 15:05:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.341 15:05:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.341 15:05:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:46.341 15:05:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:46.341 15:05:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.341 15:05:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.341 15:05:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.341 15:05:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.341 15:05:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:46.341 15:05:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.341 15:05:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.341 15:05:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.341 15:05:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:46.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:28:46.341 00:28:46.341 --- 10.0.0.2 ping statistics --- 00:28:46.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.341 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:28:46.341 15:05:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:46.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:28:46.341 00:28:46.341 --- 10.0.0.1 ping statistics --- 00:28:46.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.341 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:28:46.341 15:05:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.341 15:05:28 -- nvmf/common.sh@411 -- # return 0 00:28:46.341 15:05:28 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:46.341 15:05:28 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:49.646 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:49.646 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:49.646 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:49.646 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:49.646 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:49.646 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:49.646 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:49.646 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:49.907 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:49.907 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:49.907 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:49.907 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:49.907 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:49.907 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:49.907 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:49.907 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:49.907 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:50.168 15:05:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.168 15:05:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:50.168 15:05:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:50.168 15:05:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.168 15:05:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:50.168 15:05:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:50.168 15:05:32 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:50.168 15:05:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:50.168 15:05:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:50.168 15:05:32 -- common/autotest_common.sh@10 -- # set +x 00:28:50.168 15:05:32 -- nvmf/common.sh@470 -- # nvmfpid=1266986 00:28:50.168 15:05:32 -- nvmf/common.sh@471 -- # waitforlisten 1266986 00:28:50.168 15:05:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:50.168 15:05:32 -- common/autotest_common.sh@817 -- # '[' -z 1266986 ']' 00:28:50.168 15:05:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.168 15:05:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:50.168 15:05:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.168 15:05:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:50.168 15:05:32 -- common/autotest_common.sh@10 -- # set +x 00:28:50.430 [2024-04-26 15:05:32.863187] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:28:50.430 [2024-04-26 15:05:32.863239] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.430 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.430 [2024-04-26 15:05:32.932991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.430 [2024-04-26 15:05:33.000697] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.430 [2024-04-26 15:05:33.000740] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.430 [2024-04-26 15:05:33.000748] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.430 [2024-04-26 15:05:33.000756] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.430 [2024-04-26 15:05:33.000763] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.430 [2024-04-26 15:05:33.000870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.430 [2024-04-26 15:05:33.001097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.430 [2024-04-26 15:05:33.001098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.430 [2024-04-26 15:05:33.000946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.001 15:05:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:51.001 15:05:33 -- common/autotest_common.sh@850 -- # return 0 00:28:51.001 15:05:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:51.001 15:05:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:51.001 15:05:33 -- common/autotest_common.sh@10 -- # set +x 00:28:51.262 15:05:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.262 15:05:33 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:51.262 15:05:33 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:51.262 15:05:33 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:51.262 15:05:33 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:51.262 15:05:33 -- scripts/common.sh@310 -- # local nvmes 00:28:51.262 15:05:33 -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:28:51.262 15:05:33 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:51.262 15:05:33 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:51.262 15:05:33 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:28:51.262 15:05:33 -- scripts/common.sh@320 -- # uname -s 00:28:51.262 15:05:33 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:51.262 15:05:33 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:51.262 15:05:33 -- scripts/common.sh@325 -- # (( 1 )) 00:28:51.262 15:05:33 -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:28:51.262 15:05:33 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:51.262 15:05:33 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:28:51.262 15:05:33 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:51.262 15:05:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:51.262 15:05:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:51.262 15:05:33 -- 
common/autotest_common.sh@10 -- # set +x 00:28:51.262 ************************************ 00:28:51.262 START TEST spdk_target_abort 00:28:51.262 ************************************ 00:28:51.262 15:05:33 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:51.262 15:05:33 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:51.262 15:05:33 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:28:51.262 15:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.262 15:05:33 -- common/autotest_common.sh@10 -- # set +x 00:28:51.522 spdk_targetn1 00:28:51.522 15:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.522 15:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.522 15:05:34 -- common/autotest_common.sh@10 -- # set +x 00:28:51.522 [2024-04-26 15:05:34.139941] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.522 15:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:51.522 15:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.522 15:05:34 -- common/autotest_common.sh@10 -- # set +x 00:28:51.522 15:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:51.522 15:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.522 15:05:34 -- common/autotest_common.sh@10 -- # set +x 00:28:51.522 15:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:51.522 15:05:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:51.522 15:05:34 -- common/autotest_common.sh@10 -- # set +x 00:28:51.522 [2024-04-26 15:05:34.180209] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.522 15:05:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:51.522 15:05:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:51.782 15:05:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:51.782 15:05:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:28:51.782 15:05:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:51.782 15:05:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:51.782 15:05:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:51.782 15:05:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:51.782 15:05:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:51.782 15:05:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:51.782 15:05:34 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:51.782 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.782 [2024-04-26 15:05:34.316267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:512 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:28:51.782 [2024-04-26 15:05:34.316290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:28:51.782 [2024-04-26 15:05:34.373845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2576 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:28:51.782 [2024-04-26 15:05:34.373863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:55.079 Initializing NVMe Controllers 00:28:55.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:55.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:55.079 Initialization complete. Launching workers. 
00:28:55.079 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13126, failed: 2 00:28:55.079 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3177, failed to submit 9951 00:28:55.079 success 759, unsuccess 2418, failed 0 00:28:55.079 15:05:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:55.079 15:05:37 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:55.079 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.079 [2024-04-26 15:05:37.448104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:480 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:28:55.079 [2024-04-26 15:05:37.448138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:28:55.079 [2024-04-26 15:05:37.536002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:2344 len:8 PRP1 0x200007c54000 PRP2 0x0 00:28:55.079 [2024-04-26 15:05:37.536031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:55.079 [2024-04-26 15:05:37.615129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:4296 len:8 PRP1 0x200007c46000 PRP2 0x0 00:28:55.079 [2024-04-26 15:05:37.615154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:001a p:1 m:0 dnr:0 00:28:56.022 [2024-04-26 15:05:38.426969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:22424 len:8 PRP1 0x200007c54000 PRP2 0x0 00:28:56.022 [2024-04-26 15:05:38.427005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00f9 p:1 m:0 dnr:0 00:28:56.961 [2024-04-26 15:05:39.339225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:43128 len:8 PRP1 0x200007c50000 PRP2 0x0 00:28:56.961 [2024-04-26 15:05:39.339256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:57.928 Initializing NVMe Controllers 00:28:57.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:57.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:57.928 Initialization complete. Launching workers. 
00:28:57.928 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8497, failed: 5 00:28:57.928 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1217, failed to submit 7285 00:28:57.928 success 386, unsuccess 831, failed 0 00:28:57.928 15:05:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:57.928 15:05:40 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:58.189 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.100 [2024-04-26 15:05:42.353500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:147 nsid:1 lba:184256 len:8 PRP1 0x200007922000 PRP2 0x0 00:29:00.100 [2024-04-26 15:05:42.353546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:147 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:01.482 Initializing NVMe Controllers 00:29:01.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:01.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:01.482 Initialization complete. Launching workers. 00:29:01.482 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41892, failed: 1 00:29:01.482 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2735, failed to submit 39158 00:29:01.482 success 561, unsuccess 2174, failed 0 00:29:01.482 15:05:43 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:01.482 15:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.482 15:05:43 -- common/autotest_common.sh@10 -- # set +x 00:29:01.482 15:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:01.482 15:05:43 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:01.482 15:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:01.482 15:05:43 -- common/autotest_common.sh@10 -- # set +x 00:29:03.391 15:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:03.391 15:05:45 -- target/abort_qd_sizes.sh@61 -- # killprocess 1266986 00:29:03.391 15:05:45 -- common/autotest_common.sh@936 -- # '[' -z 1266986 ']' 00:29:03.391 15:05:45 -- common/autotest_common.sh@940 -- # kill -0 1266986 00:29:03.391 15:05:45 -- common/autotest_common.sh@941 -- # uname 00:29:03.391 15:05:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:03.391 15:05:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1266986 00:29:03.391 15:05:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:03.391 15:05:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:03.391 15:05:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1266986' 00:29:03.391 killing process with pid 1266986 00:29:03.391 15:05:45 -- common/autotest_common.sh@955 -- # kill 1266986 00:29:03.391 15:05:45 -- common/autotest_common.sh@960 -- # wait 1266986 00:29:03.391 00:29:03.391 real 0m11.929s 00:29:03.391 user 0m49.124s 00:29:03.391 sys 0m1.660s 00:29:03.391 15:05:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:03.391 15:05:45 -- common/autotest_common.sh@10 -- # set +x 00:29:03.391 ************************************ 00:29:03.391 END TEST spdk_target_abort 00:29:03.391 
************************************ 00:29:03.391 15:05:45 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:03.391 15:05:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:03.391 15:05:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:03.391 15:05:45 -- common/autotest_common.sh@10 -- # set +x 00:29:03.391 ************************************ 00:29:03.391 START TEST kernel_target_abort 00:29:03.391 ************************************ 00:29:03.391 15:05:45 -- common/autotest_common.sh@1111 -- # kernel_target 00:29:03.391 15:05:45 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:03.391 15:05:45 -- nvmf/common.sh@717 -- # local ip 00:29:03.391 15:05:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:03.391 15:05:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:03.391 15:05:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.391 15:05:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.391 15:05:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:03.391 15:05:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.391 15:05:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:03.391 15:05:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:03.391 15:05:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:03.391 15:05:45 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:03.391 15:05:45 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:03.391 15:05:45 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:03.391 15:05:45 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:03.391 15:05:45 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:03.391 15:05:45 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:03.391 15:05:45 -- nvmf/common.sh@628 -- # local block nvme 00:29:03.391 15:05:45 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:03.391 15:05:45 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:03.391 15:05:45 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:03.391 15:05:45 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:06.719 Waiting for block devices as requested 00:29:06.719 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:06.719 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:06.719 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:06.980 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:06.980 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:06.980 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:07.241 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:07.241 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:07.241 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:07.501 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:07.501 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:07.501 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:07.762 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:07.762 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:07.762 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:08.022 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:08.022 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:08.282 15:05:50 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:08.282 15:05:50 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:08.282 15:05:50 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:29:08.282 15:05:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:08.282 15:05:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:08.282 15:05:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:08.282 15:05:50 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:29:08.282 15:05:50 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:08.282 15:05:50 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:08.282 No valid GPT data, bailing 00:29:08.282 15:05:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:08.282 15:05:50 -- scripts/common.sh@391 -- # pt= 00:29:08.282 15:05:50 -- scripts/common.sh@392 -- # return 1 00:29:08.282 15:05:50 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:29:08.282 15:05:50 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:29:08.282 15:05:50 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:08.282 15:05:50 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:08.282 15:05:50 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:08.282 15:05:50 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:08.282 15:05:50 -- nvmf/common.sh@656 -- # echo 1 00:29:08.282 15:05:50 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:29:08.282 15:05:50 -- nvmf/common.sh@658 -- # echo 1 00:29:08.282 15:05:50 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:29:08.282 15:05:50 -- nvmf/common.sh@661 -- # echo tcp 00:29:08.282 15:05:50 -- nvmf/common.sh@662 -- # echo 4420 00:29:08.282 15:05:50 -- nvmf/common.sh@663 -- # echo ipv4 00:29:08.282 15:05:50 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:08.282 15:05:50 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:29:08.282 00:29:08.282 Discovery Log Number of Records 2, Generation counter 2 00:29:08.282 =====Discovery Log Entry 0====== 00:29:08.282 trtype: tcp 00:29:08.282 adrfam: ipv4 00:29:08.282 subtype: current discovery subsystem 00:29:08.282 treq: not specified, sq flow control disable supported 00:29:08.282 portid: 1 00:29:08.282 trsvcid: 4420 00:29:08.282 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:08.282 traddr: 10.0.0.1 00:29:08.282 eflags: none 00:29:08.282 sectype: none 00:29:08.282 =====Discovery Log Entry 1====== 00:29:08.282 trtype: tcp 00:29:08.282 adrfam: ipv4 00:29:08.282 subtype: nvme subsystem 00:29:08.282 treq: not specified, sq flow control disable supported 00:29:08.282 portid: 1 00:29:08.282 trsvcid: 4420 00:29:08.282 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:08.282 traddr: 10.0.0.1 00:29:08.282 eflags: none 00:29:08.282 sectype: none 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:08.282 15:05:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:08.543 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.837 Initializing NVMe Controllers 00:29:11.837 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:11.837 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:11.837 Initialization complete. Launching workers. 
00:29:11.837 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65639, failed: 0 00:29:11.837 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 65639, failed to submit 0 00:29:11.837 success 0, unsuccess 65639, failed 0 00:29:11.837 15:05:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:11.837 15:05:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.837 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.137 Initializing NVMe Controllers 00:29:15.137 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:15.137 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:15.137 Initialization complete. Launching workers. 00:29:15.137 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 106900, failed: 0 00:29:15.137 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26930, failed to submit 79970 00:29:15.137 success 0, unsuccess 26930, failed 0 00:29:15.137 15:05:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:15.137 15:05:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:15.137 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.680 Initializing NVMe Controllers 00:29:17.680 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:17.680 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:17.680 Initialization complete. Launching workers. 
00:29:17.680 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102423, failed: 0 00:29:17.680 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25598, failed to submit 76825 00:29:17.680 success 0, unsuccess 25598, failed 0 00:29:17.680 15:06:00 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:17.680 15:06:00 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:17.680 15:06:00 -- nvmf/common.sh@675 -- # echo 0 00:29:17.680 15:06:00 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:17.680 15:06:00 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:17.680 15:06:00 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:17.680 15:06:00 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:17.680 15:06:00 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:17.680 15:06:00 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:17.680 15:06:00 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:21.889 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:21.889 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:23.275 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:23.536 00:29:23.537 real 0m20.035s 00:29:23.537 user 0m9.625s 00:29:23.537 sys 0m5.961s 00:29:23.537 15:06:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:23.537 15:06:05 -- common/autotest_common.sh@10 -- # set +x 00:29:23.537 ************************************ 00:29:23.537 END TEST kernel_target_abort 00:29:23.537 ************************************ 00:29:23.537 15:06:06 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:23.537 15:06:06 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:23.537 15:06:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:23.537 15:06:06 -- nvmf/common.sh@117 -- # sync 00:29:23.537 15:06:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:23.537 15:06:06 -- nvmf/common.sh@120 -- # set +e 00:29:23.537 15:06:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:23.537 15:06:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:23.537 rmmod nvme_tcp 00:29:23.537 rmmod nvme_fabrics 00:29:23.537 rmmod nvme_keyring 00:29:23.537 15:06:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:23.537 15:06:06 -- nvmf/common.sh@124 -- # set -e 00:29:23.537 15:06:06 -- nvmf/common.sh@125 -- # return 0 00:29:23.537 15:06:06 -- nvmf/common.sh@478 -- # '[' -n 1266986 ']' 
00:29:23.537 15:06:06 -- nvmf/common.sh@479 -- # killprocess 1266986 00:29:23.537 15:06:06 -- common/autotest_common.sh@936 -- # '[' -z 1266986 ']' 00:29:23.537 15:06:06 -- common/autotest_common.sh@940 -- # kill -0 1266986 00:29:23.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1266986) - No such process 00:29:23.537 15:06:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1266986 is not found' 00:29:23.537 Process with pid 1266986 is not found 00:29:23.537 15:06:06 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:29:23.537 15:06:06 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:26.838 Waiting for block devices as requested 00:29:26.838 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:27.099 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:27.099 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:27.099 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:27.360 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:27.360 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:27.360 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:27.621 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:27.621 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:27.883 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:27.883 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:27.883 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:28.144 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:28.144 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:28.144 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:28.144 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:28.405 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:28.666 15:06:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:28.666 15:06:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:28.666 15:06:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.666 15:06:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.666 15:06:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.666 15:06:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:28.666 15:06:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.581 15:06:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:30.581 00:29:30.581 real 0m51.128s 00:29:30.581 user 1m3.943s 00:29:30.581 sys 0m18.082s 00:29:30.581 15:06:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:30.581 15:06:13 -- common/autotest_common.sh@10 -- # set +x 00:29:30.581 ************************************ 00:29:30.581 END TEST nvmf_abort_qd_sizes 00:29:30.581 ************************************ 00:29:30.581 15:06:13 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:30.581 15:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:30.581 15:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:30.581 15:06:13 -- common/autotest_common.sh@10 -- # set +x 00:29:30.841 ************************************ 00:29:30.841 START TEST keyring_file 00:29:30.841 ************************************ 00:29:30.841 15:06:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:30.841 * Looking for test storage... 
00:29:30.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:30.841 15:06:13 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:30.841 15:06:13 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.841 15:06:13 -- nvmf/common.sh@7 -- # uname -s 00:29:30.841 15:06:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.841 15:06:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.841 15:06:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.841 15:06:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.841 15:06:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.841 15:06:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.841 15:06:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.841 15:06:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.841 15:06:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.841 15:06:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.841 15:06:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:30.841 15:06:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:30.841 15:06:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.841 15:06:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.841 15:06:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.841 15:06:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.841 15:06:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.841 15:06:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.841 15:06:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.841 15:06:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.841 15:06:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.841 15:06:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.841 15:06:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.841 15:06:13 -- paths/export.sh@5 -- # export PATH 00:29:30.842 15:06:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.842 15:06:13 -- nvmf/common.sh@47 -- # : 0 00:29:30.842 15:06:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.842 15:06:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.842 15:06:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.842 15:06:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.842 15:06:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.842 15:06:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.842 15:06:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.842 15:06:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.842 15:06:13 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:30.842 15:06:13 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:30.842 15:06:13 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:30.842 15:06:13 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:30.842 15:06:13 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:30.842 15:06:13 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:30.842 15:06:13 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:30.842 15:06:13 -- keyring/common.sh@15 -- # local name key digest path 00:29:30.842 15:06:13 -- keyring/common.sh@17 -- # name=key0 00:29:30.842 15:06:13 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:30.842 15:06:13 -- keyring/common.sh@17 -- # digest=0 00:29:30.842 15:06:13 -- keyring/common.sh@18 -- # mktemp 00:29:30.842 15:06:13 -- keyring/common.sh@18 -- # path=/tmp/tmp.1zMebiWAxy 00:29:30.842 15:06:13 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:30.842 15:06:13 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:30.842 15:06:13 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:30.842 15:06:13 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:30.842 15:06:13 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:30.842 15:06:13 -- nvmf/common.sh@693 -- # digest=0 00:29:30.842 15:06:13 -- nvmf/common.sh@694 -- # python - 00:29:31.102 15:06:13 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1zMebiWAxy 00:29:31.102 15:06:13 -- keyring/common.sh@23 -- # echo /tmp/tmp.1zMebiWAxy 00:29:31.102 15:06:13 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.1zMebiWAxy 00:29:31.102 15:06:13 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:31.102 15:06:13 -- keyring/common.sh@15 -- # local name key digest path 00:29:31.102 15:06:13 -- keyring/common.sh@17 -- # name=key1 00:29:31.102 15:06:13 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:31.102 15:06:13 -- keyring/common.sh@17 -- # digest=0 00:29:31.102 15:06:13 -- keyring/common.sh@18 -- # mktemp 00:29:31.102 15:06:13 -- keyring/common.sh@18 -- # path=/tmp/tmp.Vy1X2ODsnB 00:29:31.102 15:06:13 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:31.102 15:06:13 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:29:31.102 15:06:13 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:31.102 15:06:13 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:31.102 15:06:13 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:31.102 15:06:13 -- nvmf/common.sh@693 -- # digest=0 00:29:31.102 15:06:13 -- nvmf/common.sh@694 -- # python - 00:29:31.102 15:06:13 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Vy1X2ODsnB 00:29:31.102 15:06:13 -- keyring/common.sh@23 -- # echo /tmp/tmp.Vy1X2ODsnB 00:29:31.102 15:06:13 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Vy1X2ODsnB 00:29:31.102 15:06:13 -- keyring/file.sh@30 -- # tgtpid=1277916 00:29:31.102 15:06:13 -- keyring/file.sh@32 -- # waitforlisten 1277916 00:29:31.102 15:06:13 -- common/autotest_common.sh@817 -- # '[' -z 1277916 ']' 00:29:31.102 15:06:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.102 15:06:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:31.102 15:06:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.102 15:06:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:31.102 15:06:13 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:31.102 15:06:13 -- common/autotest_common.sh@10 -- # set +x 00:29:31.102 [2024-04-26 15:06:13.643937] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:31.102 [2024-04-26 15:06:13.644010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277916 ] 00:29:31.102 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.102 [2024-04-26 15:06:13.708575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.363 [2024-04-26 15:06:13.781037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.934 15:06:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:31.934 15:06:14 -- common/autotest_common.sh@850 -- # return 0 00:29:31.934 15:06:14 -- keyring/file.sh@33 -- # rpc_cmd 00:29:31.934 15:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.934 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:29:31.934 [2024-04-26 15:06:14.418116] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.934 null0 00:29:31.934 [2024-04-26 15:06:14.450167] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:31.934 [2024-04-26 15:06:14.450507] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:31.934 [2024-04-26 15:06:14.458178] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:31.934 15:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:31.934 15:06:14 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:31.934 15:06:14 -- common/autotest_common.sh@638 -- # local es=0 00:29:31.934 15:06:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:31.934 15:06:14 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:31.934 15:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:31.934 15:06:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:31.934 15:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:31.934 15:06:14 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:31.934 15:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:31.934 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:29:31.934 [2024-04-26 15:06:14.470210] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:29:31.934 { 00:29:31.934 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.934 "secure_channel": false, 00:29:31.934 "listen_address": { 00:29:31.934 "trtype": "tcp", 00:29:31.934 "traddr": "127.0.0.1", 00:29:31.934 "trsvcid": "4420" 00:29:31.934 }, 00:29:31.934 "method": "nvmf_subsystem_add_listener", 00:29:31.934 "req_id": 1 00:29:31.934 } 00:29:31.934 Got JSON-RPC error response 00:29:31.934 response: 00:29:31.934 { 00:29:31.934 "code": -32602, 00:29:31.934 "message": "Invalid parameters" 00:29:31.934 } 00:29:31.934 15:06:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:31.934 15:06:14 -- common/autotest_common.sh@641 -- # es=1 00:29:31.934 15:06:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:31.934 15:06:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:31.934 15:06:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:31.934 15:06:14 -- keyring/file.sh@46 -- # bperfpid=1277935 00:29:31.934 15:06:14 -- keyring/file.sh@48 -- # waitforlisten 1277935 /var/tmp/bperf.sock 00:29:31.934 15:06:14 -- common/autotest_common.sh@817 -- # '[' -z 1277935 ']' 00:29:31.934 15:06:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:31.934 15:06:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:31.934 15:06:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:31.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:31.934 15:06:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:31.934 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:29:31.934 15:06:14 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:31.934 [2024-04-26 15:06:14.529794] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:29:31.935 [2024-04-26 15:06:14.529848] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277935 ] 00:29:31.935 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.194 [2024-04-26 15:06:14.604240] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.194 [2024-04-26 15:06:14.668139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.765 15:06:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:32.765 15:06:15 -- common/autotest_common.sh@850 -- # return 0 00:29:32.765 15:06:15 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1zMebiWAxy 00:29:32.765 15:06:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1zMebiWAxy 00:29:32.765 15:06:15 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Vy1X2ODsnB 00:29:32.765 15:06:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Vy1X2ODsnB 00:29:33.024 15:06:15 -- keyring/file.sh@51 -- # get_key key0 00:29:33.024 15:06:15 -- keyring/file.sh@51 -- # jq -r .path 00:29:33.024 15:06:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.024 15:06:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:33.024 15:06:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.285 15:06:15 -- keyring/file.sh@51 -- # [[ /tmp/tmp.1zMebiWAxy == \/\t\m\p\/\t\m\p\.\1\z\M\e\b\i\W\A\x\y ]] 00:29:33.286 15:06:15 -- keyring/file.sh@52 -- # get_key key1 00:29:33.286 15:06:15 -- keyring/file.sh@52 -- # jq -r .path 00:29:33.286 15:06:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.286 15:06:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.286 15:06:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:33.286 15:06:15 -- keyring/file.sh@52 -- # [[ /tmp/tmp.Vy1X2ODsnB == \/\t\m\p\/\t\m\p\.\V\y\1\X\2\O\D\s\n\B ]] 00:29:33.286 15:06:15 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:33.286 15:06:15 -- keyring/common.sh@12 -- # get_key key0 00:29:33.286 15:06:15 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:33.286 15:06:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.286 15:06:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.286 15:06:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:33.546 15:06:16 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:33.546 15:06:16 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:33.546 15:06:16 -- keyring/common.sh@12 -- # get_key key1 00:29:33.546 15:06:16 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:33.546 15:06:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.546 15:06:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.546 15:06:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:33.806 15:06:16 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:33.806 
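[editor's note] For reference, the key-registration and lookup sequence traced above can be reproduced by hand against the bdevperf RPC socket. This is a minimal sketch, not part of the test run: it assumes bdevperf is already listening on /var/tmp/bperf.sock and that the interchange-format PSK files (/tmp/tmp.1zMebiWAxy and /tmp/tmp.Vy1X2ODsnB, the mktemp paths from this trace) were generated beforehand; every RPC used below appears verbatim in the trace.

    #!/usr/bin/env bash
    # Sketch of the steps exercised by keyring/file.sh@49-57 above.
    # Paths, socket, and NQNs are taken from the trace; adjust for a different setup.
    set -e
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # Register both PSK files with the in-process keyring.
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.1zMebiWAxy
    $RPC -s $SOCK keyring_file_add_key key1 /tmp/tmp.Vy1X2ODsnB

    # Verify registered path and reference count, as file.sh@51-54 do.
    $RPC -s $SOCK keyring_get_keys | jq -r '.[] | select(.name == "key0") | .path'
    $RPC -s $SOCK keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

    # Attach a controller over TCP using key0 as the TLS PSK (file.sh@57).
    $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

After the attach, keyring_get_keys should report refcnt 2 on key0 (held by the file and by the controller), which is exactly what the (( 2 == 2 )) check at file.sh@59 in the following trace lines asserts.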
15:06:16 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:33.806 15:06:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:33.806 [2024-04-26 15:06:16.364775] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:33.806 nvme0n1 00:29:33.806 15:06:16 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:33.806 15:06:16 -- keyring/common.sh@12 -- # get_key key0 00:29:33.806 15:06:16 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:33.806 15:06:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:33.806 15:06:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:33.806 15:06:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.066 15:06:16 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:34.066 15:06:16 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:34.066 15:06:16 -- keyring/common.sh@12 -- # get_key key1 00:29:34.066 15:06:16 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.066 15:06:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.066 15:06:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:34.066 15:06:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.327 15:06:16 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:34.327 15:06:16 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:34.327 Running I/O for 1 seconds... 
00:29:35.267 00:29:35.267 Latency(us) 00:29:35.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.267 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:35.267 nvme0n1 : 1.01 13787.94 53.86 0.00 0.00 9239.12 6171.31 17148.59 00:29:35.267 =================================================================================================================== 00:29:35.267 Total : 13787.94 53.86 0.00 0.00 9239.12 6171.31 17148.59 00:29:35.267 0 00:29:35.267 15:06:17 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:35.267 15:06:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:35.528 15:06:18 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:35.528 15:06:18 -- keyring/common.sh@12 -- # get_key key0 00:29:35.528 15:06:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.528 15:06:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.528 15:06:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:35.528 15:06:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.788 15:06:18 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:35.788 15:06:18 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:35.788 15:06:18 -- keyring/common.sh@12 -- # get_key key1 00:29:35.788 15:06:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.788 15:06:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.788 15:06:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:35.788 15:06:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.788 15:06:18 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:35.788 15:06:18 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:35.788 15:06:18 -- common/autotest_common.sh@638 -- # local es=0 00:29:35.788 15:06:18 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:35.788 15:06:18 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:35.788 15:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:35.788 15:06:18 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:35.788 15:06:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:35.788 15:06:18 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:35.788 15:06:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:36.047 [2024-04-26 15:06:18.532974] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:36.047 [2024-04-26 15:06:18.533640] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250fba0 (107): Transport endpoint is not connected 00:29:36.048 [2024-04-26 15:06:18.534636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250fba0 (9): Bad file descriptor 00:29:36.048 [2024-04-26 15:06:18.535637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.048 [2024-04-26 15:06:18.535644] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:36.048 [2024-04-26 15:06:18.535649] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.048 request: 00:29:36.048 { 00:29:36.048 "name": "nvme0", 00:29:36.048 "trtype": "tcp", 00:29:36.048 "traddr": "127.0.0.1", 00:29:36.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:36.048 "adrfam": "ipv4", 00:29:36.048 "trsvcid": "4420", 00:29:36.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.048 "psk": "key1", 00:29:36.048 "method": "bdev_nvme_attach_controller", 00:29:36.048 "req_id": 1 00:29:36.048 } 00:29:36.048 Got JSON-RPC error response 00:29:36.048 response: 00:29:36.048 { 00:29:36.048 "code": -32602, 00:29:36.048 "message": "Invalid parameters" 00:29:36.048 } 00:29:36.048 15:06:18 -- common/autotest_common.sh@641 -- # es=1 00:29:36.048 15:06:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:36.048 15:06:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:36.048 15:06:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:36.048 15:06:18 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:36.048 15:06:18 -- keyring/common.sh@12 -- # get_key key0 00:29:36.048 15:06:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:36.048 15:06:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:36.048 15:06:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.048 15:06:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:36.048 15:06:18 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:36.048 15:06:18 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:36.048 15:06:18 -- keyring/common.sh@12 -- # get_key key1 00:29:36.048 15:06:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:36.048 15:06:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:36.048 15:06:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:36.048 15:06:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.308 15:06:18 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:36.308 15:06:18 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:36.308 15:06:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:36.568 15:06:19 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:36.568 15:06:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:36.568 15:06:19 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:36.568 15:06:19 -- keyring/file.sh@77 -- # jq length 00:29:36.568 15:06:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:36.827 15:06:19 
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:36.828 15:06:19 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.1zMebiWAxy 00:29:36.828 15:06:19 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.1zMebiWAxy 00:29:36.828 15:06:19 -- common/autotest_common.sh@638 -- # local es=0 00:29:36.828 15:06:19 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.1zMebiWAxy 00:29:36.828 15:06:19 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:36.828 15:06:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.828 15:06:19 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:36.828 15:06:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.828 15:06:19 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1zMebiWAxy 00:29:36.828 15:06:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1zMebiWAxy 00:29:36.828 [2024-04-26 15:06:19.459396] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1zMebiWAxy': 0100660 00:29:36.828 [2024-04-26 15:06:19.459415] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:36.828 request: 00:29:36.828 { 00:29:36.828 "name": "key0", 00:29:36.828 "path": "/tmp/tmp.1zMebiWAxy", 00:29:36.828 "method": "keyring_file_add_key", 00:29:36.828 "req_id": 1 00:29:36.828 } 00:29:36.828 Got JSON-RPC error response 00:29:36.828 response: 00:29:36.828 { 00:29:36.828 "code": -1, 00:29:36.828 "message": "Operation not permitted" 00:29:36.828 } 00:29:36.828 15:06:19 -- common/autotest_common.sh@641 -- # es=1 00:29:36.828 15:06:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:36.828 15:06:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:36.828 15:06:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:36.828 15:06:19 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.1zMebiWAxy 00:29:36.828 15:06:19 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1zMebiWAxy 00:29:36.828 15:06:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1zMebiWAxy 00:29:37.087 15:06:19 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.1zMebiWAxy 00:29:37.087 15:06:19 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:37.087 15:06:19 -- keyring/common.sh@12 -- # get_key key0 00:29:37.087 15:06:19 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.087 15:06:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.087 15:06:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.087 15:06:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.377 15:06:19 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:37.377 15:06:19 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:37.377 15:06:19 -- common/autotest_common.sh@638 -- # local es=0 00:29:37.377 15:06:19 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:37.377 15:06:19 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:37.377 15:06:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:37.377 15:06:19 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:37.377 15:06:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:37.377 15:06:19 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:37.377 15:06:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:37.377 [2024-04-26 15:06:19.920563] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.1zMebiWAxy': No such file or directory 00:29:37.377 [2024-04-26 15:06:19.920583] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:37.377 [2024-04-26 15:06:19.920600] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:37.377 [2024-04-26 15:06:19.920605] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:37.377 [2024-04-26 15:06:19.920609] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:37.377 request: 00:29:37.377 { 00:29:37.377 "name": "nvme0", 00:29:37.377 "trtype": "tcp", 00:29:37.377 "traddr": "127.0.0.1", 00:29:37.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:37.377 "adrfam": "ipv4", 00:29:37.377 "trsvcid": "4420", 00:29:37.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.377 "psk": "key0", 00:29:37.377 "method": "bdev_nvme_attach_controller", 00:29:37.377 "req_id": 1 00:29:37.377 } 00:29:37.377 Got JSON-RPC error response 00:29:37.377 response: 00:29:37.377 { 00:29:37.377 "code": -19, 00:29:37.377 "message": "No such device" 00:29:37.377 } 00:29:37.377 15:06:19 -- common/autotest_common.sh@641 -- # es=1 00:29:37.377 15:06:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:37.377 15:06:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:37.377 15:06:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:37.377 15:06:19 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:37.377 15:06:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:37.662 15:06:20 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:37.662 15:06:20 -- keyring/common.sh@15 -- # local name key digest path 00:29:37.662 15:06:20 -- keyring/common.sh@17 -- # name=key0 00:29:37.662 15:06:20 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:37.662 15:06:20 -- keyring/common.sh@17 -- # digest=0 00:29:37.662 15:06:20 -- keyring/common.sh@18 -- # mktemp 00:29:37.662 15:06:20 -- keyring/common.sh@18 -- # path=/tmp/tmp.MSEBc4lVpG 00:29:37.662 15:06:20 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:37.662 15:06:20 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:37.662 15:06:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:37.662 15:06:20 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:37.662 15:06:20 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:37.662 15:06:20 -- nvmf/common.sh@693 -- # digest=0 00:29:37.662 15:06:20 -- nvmf/common.sh@694 -- # python - 00:29:37.662 15:06:20 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MSEBc4lVpG 00:29:37.662 15:06:20 -- keyring/common.sh@23 -- # echo /tmp/tmp.MSEBc4lVpG 00:29:37.662 15:06:20 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.MSEBc4lVpG 00:29:37.662 15:06:20 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MSEBc4lVpG 00:29:37.662 15:06:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MSEBc4lVpG 00:29:37.922 15:06:20 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:37.922 15:06:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:37.922 nvme0n1 00:29:37.922 15:06:20 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:37.922 15:06:20 -- keyring/common.sh@12 -- # get_key key0 00:29:37.922 15:06:20 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.922 15:06:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.922 15:06:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.922 15:06:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:38.182 15:06:20 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:38.182 15:06:20 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:38.182 15:06:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:38.443 15:06:20 -- keyring/file.sh@101 -- # get_key key0 00:29:38.443 15:06:20 -- keyring/file.sh@101 -- # jq -r .removed 00:29:38.443 15:06:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.443 15:06:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.443 15:06:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:38.443 15:06:21 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:38.443 15:06:21 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:38.443 15:06:21 -- keyring/common.sh@12 -- # get_key key0 00:29:38.443 15:06:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:38.443 15:06:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.443 15:06:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:38.443 15:06:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.703 15:06:21 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:38.703 15:06:21 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:38.703 15:06:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:38.963 15:06:21 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:38.963 15:06:21 -- keyring/file.sh@104 -- # jq length 00:29:38.963 
15:06:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.963 15:06:21 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:38.963 15:06:21 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MSEBc4lVpG 00:29:38.963 15:06:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MSEBc4lVpG 00:29:39.223 15:06:21 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Vy1X2ODsnB 00:29:39.223 15:06:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Vy1X2ODsnB 00:29:39.223 15:06:21 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.223 15:06:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.483 nvme0n1 00:29:39.483 15:06:22 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:39.483 15:06:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:39.745 15:06:22 -- keyring/file.sh@112 -- # config='{ 00:29:39.745 "subsystems": [ 00:29:39.745 { 00:29:39.745 "subsystem": "keyring", 00:29:39.745 "config": [ 00:29:39.745 { 00:29:39.745 "method": "keyring_file_add_key", 00:29:39.745 "params": { 00:29:39.745 "name": "key0", 00:29:39.745 "path": "/tmp/tmp.MSEBc4lVpG" 00:29:39.745 } 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "method": "keyring_file_add_key", 00:29:39.745 "params": { 00:29:39.745 "name": "key1", 00:29:39.745 "path": "/tmp/tmp.Vy1X2ODsnB" 00:29:39.745 } 00:29:39.745 } 00:29:39.745 ] 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "subsystem": "iobuf", 00:29:39.745 "config": [ 00:29:39.745 { 00:29:39.745 "method": "iobuf_set_options", 00:29:39.745 "params": { 00:29:39.745 "small_pool_count": 8192, 00:29:39.745 "large_pool_count": 1024, 00:29:39.745 "small_bufsize": 8192, 00:29:39.745 "large_bufsize": 135168 00:29:39.745 } 00:29:39.745 } 00:29:39.745 ] 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "subsystem": "sock", 00:29:39.745 "config": [ 00:29:39.745 { 00:29:39.745 "method": "sock_impl_set_options", 00:29:39.745 "params": { 00:29:39.745 "impl_name": "posix", 00:29:39.745 "recv_buf_size": 2097152, 00:29:39.745 "send_buf_size": 2097152, 00:29:39.745 "enable_recv_pipe": true, 00:29:39.745 "enable_quickack": false, 00:29:39.745 "enable_placement_id": 0, 00:29:39.745 "enable_zerocopy_send_server": true, 00:29:39.745 "enable_zerocopy_send_client": false, 00:29:39.745 "zerocopy_threshold": 0, 00:29:39.745 "tls_version": 0, 00:29:39.745 "enable_ktls": false 00:29:39.745 } 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "method": "sock_impl_set_options", 00:29:39.745 "params": { 00:29:39.745 "impl_name": "ssl", 00:29:39.745 "recv_buf_size": 4096, 00:29:39.745 "send_buf_size": 4096, 00:29:39.745 "enable_recv_pipe": true, 00:29:39.745 "enable_quickack": false, 00:29:39.745 "enable_placement_id": 0, 00:29:39.745 "enable_zerocopy_send_server": true, 00:29:39.745 "enable_zerocopy_send_client": false, 00:29:39.745 "zerocopy_threshold": 0, 00:29:39.745 
"tls_version": 0, 00:29:39.745 "enable_ktls": false 00:29:39.745 } 00:29:39.745 } 00:29:39.745 ] 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "subsystem": "vmd", 00:29:39.745 "config": [] 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "subsystem": "accel", 00:29:39.745 "config": [ 00:29:39.745 { 00:29:39.745 "method": "accel_set_options", 00:29:39.745 "params": { 00:29:39.745 "small_cache_size": 128, 00:29:39.745 "large_cache_size": 16, 00:29:39.745 "task_count": 2048, 00:29:39.745 "sequence_count": 2048, 00:29:39.745 "buf_count": 2048 00:29:39.745 } 00:29:39.745 } 00:29:39.745 ] 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "subsystem": "bdev", 00:29:39.745 "config": [ 00:29:39.745 { 00:29:39.745 "method": "bdev_set_options", 00:29:39.745 "params": { 00:29:39.745 "bdev_io_pool_size": 65535, 00:29:39.745 "bdev_io_cache_size": 256, 00:29:39.745 "bdev_auto_examine": true, 00:29:39.745 "iobuf_small_cache_size": 128, 00:29:39.745 "iobuf_large_cache_size": 16 00:29:39.745 } 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "method": "bdev_raid_set_options", 00:29:39.745 "params": { 00:29:39.745 "process_window_size_kb": 1024 00:29:39.745 } 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "method": "bdev_iscsi_set_options", 00:29:39.745 "params": { 00:29:39.745 "timeout_sec": 30 00:29:39.745 } 00:29:39.745 }, 00:29:39.745 { 00:29:39.745 "method": "bdev_nvme_set_options", 00:29:39.745 "params": { 00:29:39.745 "action_on_timeout": "none", 00:29:39.745 "timeout_us": 0, 00:29:39.745 "timeout_admin_us": 0, 00:29:39.745 "keep_alive_timeout_ms": 10000, 00:29:39.745 "arbitration_burst": 0, 00:29:39.745 "low_priority_weight": 0, 00:29:39.745 "medium_priority_weight": 0, 00:29:39.745 "high_priority_weight": 0, 00:29:39.745 "nvme_adminq_poll_period_us": 10000, 00:29:39.745 "nvme_ioq_poll_period_us": 0, 00:29:39.745 "io_queue_requests": 512, 00:29:39.745 "delay_cmd_submit": true, 00:29:39.745 "transport_retry_count": 4, 00:29:39.745 "bdev_retry_count": 3, 00:29:39.745 "transport_ack_timeout": 0, 00:29:39.745 "ctrlr_loss_timeout_sec": 0, 00:29:39.745 "reconnect_delay_sec": 0, 00:29:39.745 "fast_io_fail_timeout_sec": 0, 00:29:39.745 "disable_auto_failback": false, 00:29:39.745 "generate_uuids": false, 00:29:39.745 "transport_tos": 0, 00:29:39.745 "nvme_error_stat": false, 00:29:39.745 "rdma_srq_size": 0, 00:29:39.745 "io_path_stat": false, 00:29:39.745 "allow_accel_sequence": false, 00:29:39.745 "rdma_max_cq_size": 0, 00:29:39.745 "rdma_cm_event_timeout_ms": 0, 00:29:39.745 "dhchap_digests": [ 00:29:39.745 "sha256", 00:29:39.745 "sha384", 00:29:39.745 "sha512" 00:29:39.745 ], 00:29:39.746 "dhchap_dhgroups": [ 00:29:39.746 "null", 00:29:39.746 "ffdhe2048", 00:29:39.746 "ffdhe3072", 00:29:39.746 "ffdhe4096", 00:29:39.746 "ffdhe6144", 00:29:39.746 "ffdhe8192" 00:29:39.746 ] 00:29:39.746 } 00:29:39.746 }, 00:29:39.746 { 00:29:39.746 "method": "bdev_nvme_attach_controller", 00:29:39.746 "params": { 00:29:39.746 "name": "nvme0", 00:29:39.746 "trtype": "TCP", 00:29:39.746 "adrfam": "IPv4", 00:29:39.746 "traddr": "127.0.0.1", 00:29:39.746 "trsvcid": "4420", 00:29:39.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:39.746 "prchk_reftag": false, 00:29:39.746 "prchk_guard": false, 00:29:39.746 "ctrlr_loss_timeout_sec": 0, 00:29:39.746 "reconnect_delay_sec": 0, 00:29:39.746 "fast_io_fail_timeout_sec": 0, 00:29:39.746 "psk": "key0", 00:29:39.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:39.746 "hdgst": false, 00:29:39.746 "ddgst": false 00:29:39.746 } 00:29:39.746 }, 00:29:39.746 { 00:29:39.746 "method": "bdev_nvme_set_hotplug", 
00:29:39.746 "params": { 00:29:39.746 "period_us": 100000, 00:29:39.746 "enable": false 00:29:39.746 } 00:29:39.746 }, 00:29:39.746 { 00:29:39.746 "method": "bdev_wait_for_examine" 00:29:39.746 } 00:29:39.746 ] 00:29:39.746 }, 00:29:39.746 { 00:29:39.746 "subsystem": "nbd", 00:29:39.746 "config": [] 00:29:39.746 } 00:29:39.746 ] 00:29:39.746 }' 00:29:39.746 15:06:22 -- keyring/file.sh@114 -- # killprocess 1277935 00:29:39.746 15:06:22 -- common/autotest_common.sh@936 -- # '[' -z 1277935 ']' 00:29:39.746 15:06:22 -- common/autotest_common.sh@940 -- # kill -0 1277935 00:29:39.746 15:06:22 -- common/autotest_common.sh@941 -- # uname 00:29:39.746 15:06:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:39.746 15:06:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1277935 00:29:39.746 15:06:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:39.746 15:06:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:39.746 15:06:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1277935' 00:29:39.746 killing process with pid 1277935 00:29:39.746 15:06:22 -- common/autotest_common.sh@955 -- # kill 1277935 00:29:39.746 Received shutdown signal, test time was about 1.000000 seconds 00:29:39.746 00:29:39.746 Latency(us) 00:29:39.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.746 =================================================================================================================== 00:29:39.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.746 15:06:22 -- common/autotest_common.sh@960 -- # wait 1277935 00:29:40.007 15:06:22 -- keyring/file.sh@117 -- # bperfpid=1279728 00:29:40.007 15:06:22 -- keyring/file.sh@119 -- # waitforlisten 1279728 /var/tmp/bperf.sock 00:29:40.007 15:06:22 -- common/autotest_common.sh@817 -- # '[' -z 1279728 ']' 00:29:40.007 15:06:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:40.007 15:06:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:40.007 15:06:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:40.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:40.007 15:06:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:40.007 15:06:22 -- common/autotest_common.sh@10 -- # set +x 00:29:40.007 15:06:22 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:40.007 15:06:22 -- keyring/file.sh@115 -- # echo '{ 00:29:40.007 "subsystems": [ 00:29:40.007 { 00:29:40.007 "subsystem": "keyring", 00:29:40.007 "config": [ 00:29:40.007 { 00:29:40.007 "method": "keyring_file_add_key", 00:29:40.007 "params": { 00:29:40.007 "name": "key0", 00:29:40.007 "path": "/tmp/tmp.MSEBc4lVpG" 00:29:40.007 } 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "method": "keyring_file_add_key", 00:29:40.007 "params": { 00:29:40.007 "name": "key1", 00:29:40.007 "path": "/tmp/tmp.Vy1X2ODsnB" 00:29:40.007 } 00:29:40.007 } 00:29:40.007 ] 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "subsystem": "iobuf", 00:29:40.007 "config": [ 00:29:40.007 { 00:29:40.007 "method": "iobuf_set_options", 00:29:40.007 "params": { 00:29:40.007 "small_pool_count": 8192, 00:29:40.007 "large_pool_count": 1024, 00:29:40.007 "small_bufsize": 8192, 00:29:40.007 "large_bufsize": 135168 00:29:40.007 } 00:29:40.007 } 00:29:40.007 ] 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "subsystem": "sock", 00:29:40.007 "config": [ 00:29:40.007 { 00:29:40.007 "method": "sock_impl_set_options", 00:29:40.007 "params": { 00:29:40.007 "impl_name": "posix", 00:29:40.007 "recv_buf_size": 2097152, 00:29:40.007 "send_buf_size": 2097152, 00:29:40.007 "enable_recv_pipe": true, 00:29:40.007 "enable_quickack": false, 00:29:40.007 "enable_placement_id": 0, 00:29:40.007 "enable_zerocopy_send_server": true, 00:29:40.007 "enable_zerocopy_send_client": false, 00:29:40.007 "zerocopy_threshold": 0, 00:29:40.007 "tls_version": 0, 00:29:40.007 "enable_ktls": false 00:29:40.007 } 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "method": "sock_impl_set_options", 00:29:40.007 "params": { 00:29:40.007 "impl_name": "ssl", 00:29:40.007 "recv_buf_size": 4096, 00:29:40.007 "send_buf_size": 4096, 00:29:40.007 "enable_recv_pipe": true, 00:29:40.007 "enable_quickack": false, 00:29:40.007 "enable_placement_id": 0, 00:29:40.007 "enable_zerocopy_send_server": true, 00:29:40.007 "enable_zerocopy_send_client": false, 00:29:40.007 "zerocopy_threshold": 0, 00:29:40.007 "tls_version": 0, 00:29:40.007 "enable_ktls": false 00:29:40.007 } 00:29:40.007 } 00:29:40.007 ] 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "subsystem": "vmd", 00:29:40.007 "config": [] 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "subsystem": "accel", 00:29:40.007 "config": [ 00:29:40.007 { 00:29:40.007 "method": "accel_set_options", 00:29:40.007 "params": { 00:29:40.007 "small_cache_size": 128, 00:29:40.007 "large_cache_size": 16, 00:29:40.007 "task_count": 2048, 00:29:40.007 "sequence_count": 2048, 00:29:40.007 "buf_count": 2048 00:29:40.007 } 00:29:40.007 } 00:29:40.007 ] 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "subsystem": "bdev", 00:29:40.007 "config": [ 00:29:40.007 { 00:29:40.007 "method": "bdev_set_options", 00:29:40.007 "params": { 00:29:40.007 "bdev_io_pool_size": 65535, 00:29:40.007 "bdev_io_cache_size": 256, 00:29:40.007 "bdev_auto_examine": true, 00:29:40.007 "iobuf_small_cache_size": 128, 00:29:40.007 "iobuf_large_cache_size": 16 00:29:40.007 } 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "method": "bdev_raid_set_options", 00:29:40.007 "params": { 00:29:40.007 "process_window_size_kb": 1024 00:29:40.007 } 00:29:40.007 }, 00:29:40.007 { 
00:29:40.007 "method": "bdev_iscsi_set_options", 00:29:40.007 "params": { 00:29:40.007 "timeout_sec": 30 00:29:40.007 } 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "method": "bdev_nvme_set_options", 00:29:40.007 "params": { 00:29:40.007 "action_on_timeout": "none", 00:29:40.007 "timeout_us": 0, 00:29:40.007 "timeout_admin_us": 0, 00:29:40.007 "keep_alive_timeout_ms": 10000, 00:29:40.007 "arbitration_burst": 0, 00:29:40.007 "low_priority_weight": 0, 00:29:40.007 "medium_priority_weight": 0, 00:29:40.007 "high_priority_weight": 0, 00:29:40.007 "nvme_adminq_poll_period_us": 10000, 00:29:40.007 "nvme_ioq_poll_period_us": 0, 00:29:40.007 "io_queue_requests": 512, 00:29:40.007 "delay_cmd_submit": true, 00:29:40.007 "transport_retry_count": 4, 00:29:40.007 "bdev_retry_count": 3, 00:29:40.007 "transport_ack_timeout": 0, 00:29:40.007 "ctrlr_loss_timeout_sec": 0, 00:29:40.007 "reconnect_delay_sec": 0, 00:29:40.007 "fast_io_fail_timeout_sec": 0, 00:29:40.007 "disable_auto_failback": false, 00:29:40.007 "generate_uuids": false, 00:29:40.007 "transport_tos": 0, 00:29:40.007 "nvme_error_stat": false, 00:29:40.007 "rdma_srq_size": 0, 00:29:40.007 "io_path_stat": false, 00:29:40.007 "allow_accel_sequence": false, 00:29:40.007 "rdma_max_cq_size": 0, 00:29:40.007 "rdma_cm_event_timeout_ms": 0, 00:29:40.007 "dhchap_digests": [ 00:29:40.007 "sha256", 00:29:40.007 "sha384", 00:29:40.007 "sha512" 00:29:40.007 ], 00:29:40.007 "dhchap_dhgroups": [ 00:29:40.007 "null", 00:29:40.007 "ffdhe2048", 00:29:40.007 "ffdhe3072", 00:29:40.007 "ffdhe4096", 00:29:40.007 "ffdhe6144", 00:29:40.007 "ffdhe8192" 00:29:40.007 ] 00:29:40.007 } 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "method": "bdev_nvme_attach_controller", 00:29:40.007 "params": { 00:29:40.007 "name": "nvme0", 00:29:40.007 "trtype": "TCP", 00:29:40.007 "adrfam": "IPv4", 00:29:40.007 "traddr": "127.0.0.1", 00:29:40.007 "trsvcid": "4420", 00:29:40.007 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:40.007 "prchk_reftag": false, 00:29:40.007 "prchk_guard": false, 00:29:40.007 "ctrlr_loss_timeout_sec": 0, 00:29:40.007 "reconnect_delay_sec": 0, 00:29:40.007 "fast_io_fail_timeout_sec": 0, 00:29:40.007 "psk": "key0", 00:29:40.007 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:40.007 "hdgst": false, 00:29:40.007 "ddgst": false 00:29:40.007 } 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "method": "bdev_nvme_set_hotplug", 00:29:40.007 "params": { 00:29:40.007 "period_us": 100000, 00:29:40.007 "enable": false 00:29:40.007 } 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "method": "bdev_wait_for_examine" 00:29:40.007 } 00:29:40.007 ] 00:29:40.007 }, 00:29:40.007 { 00:29:40.007 "subsystem": "nbd", 00:29:40.007 "config": [] 00:29:40.007 } 00:29:40.007 ] 00:29:40.007 }' 00:29:40.007 [2024-04-26 15:06:22.522542] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:29:40.007 [2024-04-26 15:06:22.522599] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279728 ] 00:29:40.007 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.007 [2024-04-26 15:06:22.595072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.007 [2024-04-26 15:06:22.646383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.268 [2024-04-26 15:06:22.780226] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:40.838 15:06:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:40.838 15:06:23 -- common/autotest_common.sh@850 -- # return 0 00:29:40.838 15:06:23 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:40.838 15:06:23 -- keyring/file.sh@120 -- # jq length 00:29:40.838 15:06:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.838 15:06:23 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:40.838 15:06:23 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:40.838 15:06:23 -- keyring/common.sh@12 -- # get_key key0 00:29:40.838 15:06:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.838 15:06:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.838 15:06:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.838 15:06:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:41.098 15:06:23 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:41.098 15:06:23 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:41.098 15:06:23 -- keyring/common.sh@12 -- # get_key key1 00:29:41.098 15:06:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:41.098 15:06:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:41.098 15:06:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:41.098 15:06:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:41.098 15:06:23 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:41.098 15:06:23 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:41.098 15:06:23 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:41.098 15:06:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:41.359 15:06:23 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:41.359 15:06:23 -- keyring/file.sh@1 -- # cleanup 00:29:41.359 15:06:23 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.MSEBc4lVpG /tmp/tmp.Vy1X2ODsnB 00:29:41.359 15:06:23 -- keyring/file.sh@20 -- # killprocess 1279728 00:29:41.359 15:06:23 -- common/autotest_common.sh@936 -- # '[' -z 1279728 ']' 00:29:41.359 15:06:23 -- common/autotest_common.sh@940 -- # kill -0 1279728 00:29:41.359 15:06:23 -- common/autotest_common.sh@941 -- # uname 00:29:41.359 15:06:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:41.359 15:06:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1279728 00:29:41.359 15:06:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:41.359 15:06:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:41.359 15:06:23 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1279728' 00:29:41.359 killing process with pid 1279728 00:29:41.359 15:06:23 -- common/autotest_common.sh@955 -- # kill 1279728 00:29:41.359 Received shutdown signal, test time was about 1.000000 seconds 00:29:41.359 00:29:41.359 Latency(us) 00:29:41.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.360 =================================================================================================================== 00:29:41.360 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:41.360 15:06:23 -- common/autotest_common.sh@960 -- # wait 1279728 00:29:41.620 15:06:24 -- keyring/file.sh@21 -- # killprocess 1277916 00:29:41.621 15:06:24 -- common/autotest_common.sh@936 -- # '[' -z 1277916 ']' 00:29:41.621 15:06:24 -- common/autotest_common.sh@940 -- # kill -0 1277916 00:29:41.621 15:06:24 -- common/autotest_common.sh@941 -- # uname 00:29:41.621 15:06:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:41.621 15:06:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1277916 00:29:41.621 15:06:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:41.621 15:06:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:41.621 15:06:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1277916' 00:29:41.621 killing process with pid 1277916 00:29:41.621 15:06:24 -- common/autotest_common.sh@955 -- # kill 1277916 00:29:41.621 [2024-04-26 15:06:24.139757] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:41.621 15:06:24 -- common/autotest_common.sh@960 -- # wait 1277916 00:29:41.881 00:29:41.881 real 0m10.971s 00:29:41.881 user 0m26.160s 00:29:41.881 sys 0m2.607s 00:29:41.881 15:06:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:41.882 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:29:41.882 ************************************ 00:29:41.882 END TEST keyring_file 00:29:41.882 ************************************ 00:29:41.882 15:06:24 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:41.882 15:06:24 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:41.882 15:06:24 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:41.882 15:06:24 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:41.882 15:06:24 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:41.882 15:06:24 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:41.882 15:06:24 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:41.882 15:06:24 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:41.882 15:06:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:41.882 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:29:41.882 15:06:24 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:29:41.882 15:06:24 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:41.882 15:06:24 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:41.882 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:29:50.024 INFO: APP EXITING 00:29:50.024 INFO: killing all VMs 00:29:50.024 INFO: killing vhost app 00:29:50.024 INFO: EXIT DONE 00:29:52.569 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:65:00.0 (144d a80a): Already using the nvme driver 00:29:52.569 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:29:52.569 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:29:55.867 Cleaning 00:29:55.867 Removing: /var/run/dpdk/spdk0/config 00:29:55.868 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:55.868 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:55.868 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:55.868 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:55.868 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:55.868 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:55.868 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:55.868 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:55.868 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:55.868 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:55.868 Removing: /var/run/dpdk/spdk1/config 00:29:55.868 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:55.868 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:55.868 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:55.868 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:55.868 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:55.868 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:55.868 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:55.868 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:55.868 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:55.868 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:55.868 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:55.868 Removing: /var/run/dpdk/spdk2/config 00:29:55.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:55.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:55.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:55.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:55.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:55.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:29:55.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:55.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:55.868 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:55.868 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:55.868 Removing: /var/run/dpdk/spdk3/config 00:29:55.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:55.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:55.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:55.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:55.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:55.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:55.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:55.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:55.868 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:55.868 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:55.868 Removing: /var/run/dpdk/spdk4/config 00:29:55.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:55.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:55.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:55.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:55.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:55.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:55.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:55.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:55.868 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:55.868 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:55.868 Removing: /dev/shm/bdev_svc_trace.1 00:29:55.868 Removing: /dev/shm/nvmf_trace.0 00:29:55.868 Removing: /dev/shm/spdk_tgt_trace.pid858199 00:29:55.868 Removing: /var/run/dpdk/spdk0 00:29:55.868 Removing: /var/run/dpdk/spdk1 00:29:55.868 Removing: /var/run/dpdk/spdk2 00:29:55.868 Removing: /var/run/dpdk/spdk3 00:29:55.868 Removing: /var/run/dpdk/spdk4 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1006214 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1006568 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1012108 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1019385 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1022478 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1034787 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1045725 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1047835 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1048955 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1070043 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1074794 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1080262 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1082191 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1084284 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1084621 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1084843 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1084981 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1085691 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1087995 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1088979 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1089496 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1092199 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1092904 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1093620 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1098634 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1110797 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1115734 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1123486 00:29:55.868 Removing: 
/var/run/dpdk/spdk_pid1125007 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1126845 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1132076 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1137103 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1146314 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1146316 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1151433 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1151703 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1151795 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1152425 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1152445 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1157642 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1158386 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1163728 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1166981 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1174152 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1180586 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1189029 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1189031 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1212090 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1212935 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1213748 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1214446 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1215518 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1216195 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1216884 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1217569 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1222744 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1223042 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1231009 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1231138 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1233907 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1241080 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1241091 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1247061 00:29:55.868 Removing: /var/run/dpdk/spdk_pid1249578 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1251950 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1253295 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1255780 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1257051 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1267186 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1267766 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1268429 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1271404 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1271981 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1272450 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1277916 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1277935 00:29:56.130 Removing: /var/run/dpdk/spdk_pid1279728 00:29:56.130 Removing: /var/run/dpdk/spdk_pid856630 00:29:56.130 Removing: /var/run/dpdk/spdk_pid858199 00:29:56.130 Removing: /var/run/dpdk/spdk_pid859138 00:29:56.130 Removing: /var/run/dpdk/spdk_pid860655 00:29:56.130 Removing: /var/run/dpdk/spdk_pid860979 00:29:56.130 Removing: /var/run/dpdk/spdk_pid862052 00:29:56.130 Removing: /var/run/dpdk/spdk_pid862387 00:29:56.130 Removing: /var/run/dpdk/spdk_pid862576 00:29:56.130 Removing: /var/run/dpdk/spdk_pid863659 00:29:56.130 Removing: /var/run/dpdk/spdk_pid864441 00:29:56.130 Removing: /var/run/dpdk/spdk_pid864827 00:29:56.130 Removing: /var/run/dpdk/spdk_pid865146 00:29:56.130 Removing: /var/run/dpdk/spdk_pid865495 00:29:56.130 Removing: /var/run/dpdk/spdk_pid865871 00:29:56.130 Removing: /var/run/dpdk/spdk_pid866103 00:29:56.130 Removing: /var/run/dpdk/spdk_pid866461 00:29:56.130 Removing: /var/run/dpdk/spdk_pid866847 00:29:56.130 Removing: /var/run/dpdk/spdk_pid868263 00:29:56.130 Removing: 
/var/run/dpdk/spdk_pid871850 00:29:56.130 Removing: /var/run/dpdk/spdk_pid872215 00:29:56.130 Removing: /var/run/dpdk/spdk_pid872518 00:29:56.130 Removing: /var/run/dpdk/spdk_pid872609 00:29:56.130 Removing: /var/run/dpdk/spdk_pid873135 00:29:56.130 Removing: /var/run/dpdk/spdk_pid873322 00:29:56.130 Removing: /var/run/dpdk/spdk_pid873707 00:29:56.130 Removing: /var/run/dpdk/spdk_pid874035 00:29:56.130 Removing: /var/run/dpdk/spdk_pid874328 00:29:56.130 Removing: /var/run/dpdk/spdk_pid874423 00:29:56.130 Removing: /var/run/dpdk/spdk_pid874788 00:29:56.130 Removing: /var/run/dpdk/spdk_pid874806 00:29:56.130 Removing: /var/run/dpdk/spdk_pid875423 00:29:56.130 Removing: /var/run/dpdk/spdk_pid875632 00:29:56.130 Removing: /var/run/dpdk/spdk_pid876021 00:29:56.130 Removing: /var/run/dpdk/spdk_pid876401 00:29:56.130 Removing: /var/run/dpdk/spdk_pid876465 00:29:56.130 Removing: /var/run/dpdk/spdk_pid876841 00:29:56.130 Removing: /var/run/dpdk/spdk_pid877199 00:29:56.130 Removing: /var/run/dpdk/spdk_pid877490 00:29:56.130 Removing: /var/run/dpdk/spdk_pid877729 00:29:56.130 Removing: /var/run/dpdk/spdk_pid877967 00:29:56.130 Removing: /var/run/dpdk/spdk_pid878320 00:29:56.130 Removing: /var/run/dpdk/spdk_pid878683 00:29:56.130 Removing: /var/run/dpdk/spdk_pid879045 00:29:56.130 Removing: /var/run/dpdk/spdk_pid879399 00:29:56.130 Removing: /var/run/dpdk/spdk_pid879658 00:29:56.130 Removing: /var/run/dpdk/spdk_pid879901 00:29:56.130 Removing: /var/run/dpdk/spdk_pid880164 00:29:56.130 Removing: /var/run/dpdk/spdk_pid880522 00:29:56.130 Removing: /var/run/dpdk/spdk_pid880878 00:29:56.130 Removing: /var/run/dpdk/spdk_pid881239 00:29:56.130 Removing: /var/run/dpdk/spdk_pid881597 00:29:56.130 Removing: /var/run/dpdk/spdk_pid881918 00:29:56.130 Removing: /var/run/dpdk/spdk_pid882171 00:29:56.130 Removing: /var/run/dpdk/spdk_pid882422 00:29:56.130 Removing: /var/run/dpdk/spdk_pid882724 00:29:56.130 Removing: /var/run/dpdk/spdk_pid883087 00:29:56.130 Removing: /var/run/dpdk/spdk_pid883372 00:29:56.130 Removing: /var/run/dpdk/spdk_pid883830 00:29:56.130 Removing: /var/run/dpdk/spdk_pid888438 00:29:56.130 Removing: /var/run/dpdk/spdk_pid942279 00:29:56.130 Removing: /var/run/dpdk/spdk_pid947636 00:29:56.130 Removing: /var/run/dpdk/spdk_pid958240 00:29:56.130 Removing: /var/run/dpdk/spdk_pid965264 00:29:56.130 Removing: /var/run/dpdk/spdk_pid970060 00:29:56.130 Removing: /var/run/dpdk/spdk_pid970875 00:29:56.392 Removing: /var/run/dpdk/spdk_pid984867 00:29:56.392 Removing: /var/run/dpdk/spdk_pid984876 00:29:56.392 Removing: /var/run/dpdk/spdk_pid985882 00:29:56.392 Removing: /var/run/dpdk/spdk_pid986880 00:29:56.392 Removing: /var/run/dpdk/spdk_pid987888 00:29:56.392 Removing: /var/run/dpdk/spdk_pid988766 00:29:56.392 Removing: /var/run/dpdk/spdk_pid988888 00:29:56.392 Removing: /var/run/dpdk/spdk_pid989115 00:29:56.392 Removing: /var/run/dpdk/spdk_pid989233 00:29:56.392 Removing: /var/run/dpdk/spdk_pid989236 00:29:56.392 Removing: /var/run/dpdk/spdk_pid990248 00:29:56.392 Removing: /var/run/dpdk/spdk_pid991257 00:29:56.392 Removing: /var/run/dpdk/spdk_pid992265 00:29:56.392 Removing: /var/run/dpdk/spdk_pid992935 00:29:56.392 Removing: /var/run/dpdk/spdk_pid992952 00:29:56.392 Removing: /var/run/dpdk/spdk_pid993276 00:29:56.392 Removing: /var/run/dpdk/spdk_pid994710 00:29:56.392 Removing: /var/run/dpdk/spdk_pid996116 00:29:56.392 Clean 00:29:56.392 15:06:39 -- common/autotest_common.sh@1437 -- # return 0 00:29:56.392 15:06:39 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:56.392 15:06:39 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:29:56.392 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:29:56.653 15:06:39 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:56.653 15:06:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:56.653 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:29:56.653 15:06:39 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:56.653 15:06:39 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:56.653 15:06:39 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:56.653 15:06:39 -- spdk/autotest.sh@389 -- # hash lcov 00:29:56.653 15:06:39 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:56.653 15:06:39 -- spdk/autotest.sh@391 -- # hostname 00:29:56.653 15:06:39 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:56.653 geninfo: WARNING: invalid characters removed from testname! 00:30:23.224 15:07:02 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:23.224 15:07:05 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:24.604 15:07:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:25.983 15:07:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:27.889 15:07:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:29.797 15:07:12 -- 
spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:31.178 15:07:13 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:31.440 15:07:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.440 15:07:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:31.440 15:07:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.440 15:07:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.440 15:07:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.440 15:07:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.440 15:07:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.440 15:07:13 -- paths/export.sh@5 -- $ export PATH 00:30:31.440 15:07:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.440 15:07:13 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:31.440 15:07:13 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:31.440 15:07:13 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714136833.XXXXXX 00:30:31.440 15:07:13 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714136833.WaFqzC 00:30:31.440 15:07:13 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:31.440 15:07:13 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:31.440 15:07:13 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:31.440 15:07:13 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:31.440 15:07:13 -- 
common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:31.440 15:07:13 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:31.440 15:07:13 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:30:31.440 15:07:13 -- common/autotest_common.sh@10 -- $ set +x 00:30:31.440 15:07:13 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:31.440 15:07:13 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:30:31.440 15:07:13 -- pm/common@17 -- $ local monitor 00:30:31.440 15:07:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:31.440 15:07:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1291277 00:30:31.440 15:07:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:31.440 15:07:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1291279 00:30:31.440 15:07:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:31.440 15:07:13 -- pm/common@21 -- $ date +%s 00:30:31.440 15:07:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1291281 00:30:31.440 15:07:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:31.440 15:07:13 -- pm/common@21 -- $ date +%s 00:30:31.440 15:07:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1291284 00:30:31.440 15:07:13 -- pm/common@26 -- $ sleep 1 00:30:31.440 15:07:13 -- pm/common@21 -- $ date +%s 00:30:31.440 15:07:13 -- pm/common@21 -- $ date +%s 00:30:31.440 15:07:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714136833 00:30:31.440 15:07:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714136833 00:30:31.440 15:07:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714136833 00:30:31.440 15:07:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714136833 00:30:31.440 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714136833_collect-vmstat.pm.log 00:30:31.440 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714136833_collect-cpu-load.pm.log 00:30:31.440 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714136833_collect-bmc-pm.bmc.pm.log 00:30:31.440 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714136833_collect-cpu-temp.pm.log 00:30:32.430 15:07:14 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:32.430 15:07:14 
-- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:30:32.430 15:07:14 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:32.430 15:07:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:32.430 15:07:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:32.430 15:07:14 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:32.430 15:07:14 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:32.430 15:07:14 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:32.430 15:07:14 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:32.430 15:07:14 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:32.430 15:07:14 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:32.430 15:07:14 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:32.430 15:07:14 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:32.430 15:07:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:32.430 15:07:14 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:32.430 15:07:14 -- pm/common@45 -- $ pid=1291294 00:30:32.430 15:07:14 -- pm/common@52 -- $ sudo kill -TERM 1291294 00:30:32.430 15:07:15 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:32.430 15:07:15 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:32.430 15:07:15 -- pm/common@45 -- $ pid=1291296 00:30:32.430 15:07:15 -- pm/common@52 -- $ sudo kill -TERM 1291296 00:30:32.430 15:07:15 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:32.430 15:07:15 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:32.430 15:07:15 -- pm/common@45 -- $ pid=1291297 00:30:32.430 15:07:15 -- pm/common@52 -- $ sudo kill -TERM 1291297 00:30:32.694 15:07:15 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:32.694 15:07:15 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:32.694 15:07:15 -- pm/common@45 -- $ pid=1291299 00:30:32.694 15:07:15 -- pm/common@52 -- $ sudo kill -TERM 1291299 00:30:32.694 + [[ -n 736173 ]] 00:30:32.694 + sudo kill 736173 00:30:32.705 [Pipeline] } 00:30:32.725 [Pipeline] // stage 00:30:32.730 [Pipeline] } 00:30:32.746 [Pipeline] // timeout 00:30:32.751 [Pipeline] } 00:30:32.767 [Pipeline] // catchError 00:30:32.772 [Pipeline] } 00:30:32.788 [Pipeline] // wrap 00:30:32.794 [Pipeline] } 00:30:32.805 [Pipeline] // catchError 00:30:32.813 [Pipeline] stage 00:30:32.815 [Pipeline] { (Epilogue) 00:30:32.828 [Pipeline] catchError 00:30:32.829 [Pipeline] { 00:30:32.844 [Pipeline] echo 00:30:32.845 Cleanup processes 00:30:32.849 [Pipeline] sh 00:30:33.134 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:33.134 1291390 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:33.134 1291851 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:33.148 [Pipeline] sh 00:30:33.435 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:33.435 ++ grep -v 'sudo pgrep' 00:30:33.435 ++ awk '{print $1}' 00:30:33.435 + sudo kill -9 1291390 
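The trace just above is the Epilogue's leftover-process sweep: list anything still running under the job workspace, drop the pgrep invocation itself, and force-kill what remains (here the lingering ipmitool sdr dump, pid 1291390). A minimal standalone sketch of that pattern, assuming the same workspace path; the variable name is illustrative:

  # hypothetical sketch of the cleanup-processes step traced above
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  [ -n "$pids" ] && sudo kill -9 $pids || true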
00:30:33.448 [Pipeline] sh 00:30:33.732 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:43.734 [Pipeline] sh 00:30:44.021 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:44.021 Artifacts sizes are good 00:30:44.034 [Pipeline] archiveArtifacts 00:30:44.041 Archiving artifacts 00:30:44.218 [Pipeline] sh 00:30:44.502 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:44.518 [Pipeline] cleanWs 00:30:44.528 [WS-CLEANUP] Deleting project workspace... 00:30:44.528 [WS-CLEANUP] Deferred wipeout is used... 00:30:44.535 [WS-CLEANUP] done 00:30:44.537 [Pipeline] } 00:30:44.559 [Pipeline] // catchError 00:30:44.572 [Pipeline] sh 00:30:44.865 + logger -p user.info -t JENKINS-CI 00:30:44.875 [Pipeline] } 00:30:44.892 [Pipeline] // stage 00:30:44.897 [Pipeline] } 00:30:44.914 [Pipeline] // node 00:30:44.920 [Pipeline] End of Pipeline 00:30:44.948 Finished: SUCCESS